Hey HN,
My co-founders and I have been building GridTravel, a free iOS app for planning and sharing travel routes with turn-by-turn GPS nav. We just launched yesterday after App Store approval.
We're three 21-year-old co-founders and best friends since middle school. We built GridTravel after years of frustration navigating new cities on every trip we took together.
The idea: most people either search Google for "top 10 places to visit in…" lists or turn to social media for inspiration on where to go. GridTravel is built around user-generated routes: actual paths someone walked, which you can follow, save, download, and discover from other travelers. Users can also create private routes and collaborate on them with friends.
Tech stack: Mapbox (Nav SDK + maps), Supabase (auth, DB, storage), and Swift. Native iOS for now, Android coming soon.
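To make the data model concrete, here's a minimal sketch of what storing a shared route could look like, written in Python with supabase-py just for brevity. Every table and column name here is invented for illustration; it's not GridTravel's actual schema.

    from supabase import create_client

    supabase = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-ANON-KEY")

    # Hypothetical shape of a user-generated route: an ordered list of
    # waypoints someone actually walked, plus sharing metadata.
    route = {
        "title": "Old town food crawl",
        "city": "Lisbon",
        "is_private": False,  # private routes visible to collaborators only
        "waypoints": [
            {"lat": 38.7139, "lng": -9.1391, "note": "start: Rossio Square"},
            {"lat": 38.7107, "lng": -9.1368, "note": "pastel de nata stop"},
        ],
    }
    supabase.table("routes").insert(route).execute()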
Our two real cost drivers are Mapbox Search (hit when users create routes) and Mapbox Navigation (hit when users run live navigation). Both have free tiers, then scale with MAU. We launched fully free to remove the barrier to entry, and we'll revisit pricing in Year 2 once nav costs start burning a hole in our pocket.
Current state: we're in the UGC cold-start hole. The app's value scales with route density in a given city, but route density requires users, who require routes. Classic chicken and egg. Our current plan:

1. Manually seed 25-30 routes per city, starting with 5-10 priority cities where we have personal networks, rather than spreading ourselves thin.

2. Use short-form content as the primary social channel (TikTok, Reels, Shorts). We're A/B testing whether route walkthroughs convert better than informational/skit videos.

3. Partner with micro-influencers in those cities (5k-50k followers) for in-app routes plus cross-posts on their channels.
Curious what HN thinks. Especially anyone who's shipped a UGC product. What worked for you on cold start? What do you wish you'd done differently? Happy to answer any questions about the app, costs, etc.
App link: https://apps.apple.com/us/app/gridtravel-local-routes/id6762...
Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.
We'd long been frustrated by how little effort goes into agentic models that run on budget phones, so we dug in and arrived at an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
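To make "retrieval-and-assembly" concrete, here's a toy single-head cross-attention match of a pooled query embedding against tool-name embeddings, in plain numpy. The embeddings are random placeholders; the point is only that attention alone can do the matching step.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    rng = np.random.default_rng(0)
    d = 16
    tools = ["set_timer", "send_message", "start_navigation"]
    tool_emb = rng.normal(size=(len(tools), d))  # keys: one embedding per tool name
    query = rng.normal(size=(1, d))              # pooled embedding of the user request

    # Cross-attention scores: the query attends over the tool set and the
    # highest-weight tool is the "retrieved" function to call.
    weights = softmax(query @ tool_emb.T / np.sqrt(d))
    print(tools[int(weights.argmax())], weights.round(3))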
Simple Attention Networks: the entire model is just attention and gating, no MLPs anywhere. Needle is an experimental run targeting single-shot function calling on consumer devices (phones, watches, glasses...).
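I'm only guessing at the exact layer layout here (the writeup linked below has the real one), but "just attention and gating, no MLPs" roughly means something like this numpy sketch, where a multiplicative gate stands in for the usual FFN sublayer:

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def attn_gate_layer(x, Wq, Wk, Wv, Wg, Wo):
        # Single-head self-attention (causal mask omitted for brevity).
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        a = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v
        g = sigmoid(x @ Wg)        # gate replaces the FFN sublayer
        return x + (g * a) @ Wo    # residual connection; no MLP anywhere

    rng = np.random.default_rng(1)
    x = rng.normal(size=(8, 32))   # 8 tokens, d_model = 32
    W = [rng.normal(size=(32, 32)) * 0.1 for _ in range(5)]
    print(attn_gate_layer(x, *W).shape)  # (8, 32)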
Training:

- Pretrained on 200B tokens across 16 TPU v6e (27 hours)

- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)

- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)
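For a sense of what one synthesized sample looks like, here's a hypothetical single-shot function-calling example in the general shape such datasets take; the exact Needle data format may differ:

    # Hypothetical training sample: tools + request in, one JSON call out.
    sample = {
        "tools": [{
            "name": "set_timer",
            "parameters": {"minutes": "integer", "label": "string"},
        }],
        "user": "remind me to take the pizza out in 12 minutes",
        "target": {"name": "set_timer",
                   "arguments": {"minutes": 12, "label": "pizza"}},
    }
    print(sample["target"])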
You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle
The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...
We found that the "no FFN" result generalizes beyond function calling to any task where the model has access to external structured knowledge (RAG, tool use): the model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.
While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, and LFM2.5-350M on single-shot function calling, those models have more scope/capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.
This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544
Everything is MIT licensed.

Weights: https://huggingface.co/Cactus-Compute/needle

GitHub: https://github.com/cactus-compute/needle
Every privacy-focused DNS service requires an account: NextDNS, Cloudflare for Families, Apple's iCloud Private Relay (paid, Apple-only). The protocol that doesn't require one - ODoH (Oblivious DoH) - had basically one well-known public relay operator (Frank Denis on Fastly Compute, the default in dnscrypt-proxy). I built a second one, and a client to talk to it.
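For anyone unfamiliar with ODoH (RFC 9230), the trick is splitting knowledge between two parties: the client HPKE-encrypts its DNS query to the target's public key and sends the ciphertext through the relay, so the relay sees the client IP but not the query, and the target sees the query but not the client IP. A rough sketch in Python; the HPKE helpers are stubs and the hostnames are placeholders, not my actual deployment:

    import requests  # assumes the requests package

    def hpke_seal(target_key: bytes, plaintext: bytes) -> bytes:
        # Placeholder: real ODoH encapsulates the DNS message with HPKE
        # (RFC 9180) under the target's published key.
        raise NotImplementedError

    def hpke_open(ciphertext: bytes) -> bytes:
        # Placeholder: decrypt the target's sealed answer.
        raise NotImplementedError

    def odoh_query(dns_query: bytes, target_key: bytes) -> bytes:
        sealed = hpke_seal(target_key, dns_query)
        r = requests.post(
            "https://relay.example/proxy",           # relay never sees plaintext
            params={"targethost": "target.example",  # RFC 9230 proxy parameters
                    "targetpath": "/dns-query"},
            data=sealed,
            headers={"content-type": "application/oblivious-dns-message"},
        )
        return hpke_open(r.content)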
A few months ago I stumbled on obra's superpowers repository: https://github.com/obra/superpowers. I really liked the approach: you enforce discipline for your agent through a skill-based workflow. Even though coding agents (Copilot included) have become a lot better at natively handling complex tasks, they still wander off and lose track of things. Superpowers fixed this and enabled long-running sessions without the agent losing its focus.

So I decided to build a Copilot-tailored skill suite around the core idea of superpowers. I didn't just want to port superpowers to Copilot; I took inspiration from it and improved on it. JDS enforces a strict think -> plan -> execute pipeline where nothing gets skipped. It leverages Copilot's built-in todo dependencies and provides a live task-graph visualizer, which makes the agentic workflow and its parallelism visible. Curious whether others have tried similar approaches, and what's worked or not.
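The dependency part is easy to illustrate. This toy scheduler (not JDS's actual code) releases tasks only when their prerequisites are done, which is also where the parallelism in the task graph comes from:

    from graphlib import TopologicalSorter

    # Toy think -> plan -> execute graph; the execute steps can run in
    # parallel once planning finishes. Purely illustrative.
    graph = {
        "plan": {"think"},
        "execute:api": {"plan"},
        "execute:ui": {"plan"},
        "verify": {"execute:api", "execute:ui"},
    }
    ts = TopologicalSorter(graph)
    ts.prepare()
    while ts.is_active():
        ready = list(ts.get_ready())  # everything here could run concurrently
        print("runnable now:", ready)
        for task in ready:
            ts.done(task)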
An attempt at a single-pass LLVM frontend in ~3000 lines of C with no external dependencies, no malloc, and no AST. Included are some graphical examples. The IR isn't perfect, and the README touches on one particular shortcoming.
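The "single pass, no AST" idea, for anyone who hasn't seen it: emit IR the moment each subexpression is reduced, so no tree ever exists. A toy version for arithmetic expressions, in Python rather than the project's C:

    import re

    def compile_expr(src: str) -> str:
        # Recursive-descent parse that emits LLVM IR as it goes: no AST,
        # no structures beyond the parser's own call stack.
        toks = re.findall(r"\d+|[-+*/()]", src) + ["<eof>"]
        state = {"pos": 0, "tmp": 0}
        out = []

        def peek():
            return toks[state["pos"]]

        def take():
            state["pos"] += 1
            return toks[state["pos"] - 1]

        def fresh():
            state["tmp"] += 1
            return f"%t{state['tmp']}"

        def atom():
            if peek() == "(":
                take()
                v = add()
                take()  # consume ')'
                return v
            return take()  # integer literal used directly

        def binop(next_level, ops):
            lhs = next_level()
            while peek() in ops:
                op = ops[take()]
                rhs, dst = next_level(), fresh()
                out.append(f"{dst} = {op} i32 {lhs}, {rhs}")
                lhs = dst
            return lhs

        def mul():
            return binop(atom, {"*": "mul", "/": "sdiv"})

        def add():
            return binop(mul, {"+": "add", "-": "sub"})

        out.append(f"ret i32 {add()}")
        return "\n".join(out)

    print(compile_expr("1 + 2 * (3 - 4)"))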
Hi HN, I’m Namanyay from Gigacatalyst (link: https://gigacatalyst.com/). Gigacatalyst lets sales, CS, and end users build one-off features, so your SaaS can support long-tail customer workflows without pulling engineers away from the roadmap.
When you sell software to large businesses, you realize that each customer needs their own workflows and features. Traditionally, that means either long engineering roadmaps or customers resorting to workarounds.
But what if everyone could build their critical missing features just by talking to an AI? That’s what we do at Gigacatalyst. We provide an AI customization layer for your customers, CS team, and sales team to build these missing critical workflows without needing any engineers at all. Think Lovable, but built on top of YOUR platform.
We connect to your product's APIs, learn your data model and design system, and let non-technical users build governed apps via natural language - inside your product, under your brand.
Here’s what it looks like in action: https://www.youtube.com/watch?v=_taSpSphH6E
One of our customers, a Series B company, saw its users (not engineers: managers, ops people, facility directors) build critical workflows like:
- Parts stockout prevention: A maintenance manager typed "show me which parts will run out in the next 2 weeks based on usage over the last 90 days, accounting for vendor lead times." The app tracks consumption velocity, forecasts stockouts, and alerts before it's too late. He says it's prevented ~$500K in emergency downtime.
- Invoice OCR from phone photos: Technicians kept losing paper invoices. The prompt: "upload a photo of the invoice, extract vendor name, date, amount, and line items, then match it to the purchase order and flag discrepancies." Now techs snap a photo on-site and it lands in the system of record automatically.
- Restaurant emergency triage: A pizza chain's facilities manager was drowning in maintenance requests. He built a priority matrix: "walk-in freezer not cooling" auto-routes as CRITICAL, "dining room light flickering" goes to LOW. He's now able to manage backlogs with the correct priority.
How Gigacatalyst works under the hood:
1. Agentic API discovery: Our agents go through your app and parse your endpoints, query params, request/response shapes, and sample data to build the base layer.
2. Generation and Validation: When a user describes what they want, our AI generates an app. We run multiple validation steps, including static checks, runtime error analysis, and LLM-as-a-judge.
3. Sandboxing and Compilation: We wrote our own compilation and sandboxing framework to get the fastest speeds and lowest costs. This means that users can interact with the built app in seconds.
4. Proxy layer: We put a proxy in front of all APIs to handle auth, tenant isolation, and rate limiting. Everything the agent touches is controlled, logged, observed, and version-controlled. (A minimal sketch of the proxy idea follows below.)
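The proxy layer is the easiest piece to sketch. Here's a hypothetical minimal version in Python; the real thing obviously does far more (credential exchange, rate limits, versioning), and the header and field names here are made up:

    import logging
    import urllib.request

    logging.basicConfig(level=logging.INFO)

    def proxied_call(tenant: dict, method: str, path: str, body=None):
        # Generated apps never hold customer API keys: every call goes
        # through the proxy, which injects auth, scopes the tenant, and
        # logs the request for audit.
        req = urllib.request.Request(
            url=tenant["api_base"] + path,  # per-tenant base URL + token
            data=body,
            method=method,
            headers={
                "Authorization": f"Bearer {tenant['token']}",
                "X-Tenant-Id": tenant["id"],  # hypothetical header
            },
        )
        logging.info("tenant=%s %s %s", tenant["id"], method, path)
        with urllib.request.urlopen(req) as resp:
            return resp.read()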
With 2,000+ daily users, 900+ apps built, and 70% 30-day retention, today we're opening a public demo.
Try it: https://app.gigacatalyst.com/ - enter your SaaS product's API URL (or just the homepage) and start prompting.
If you're serving a variety of use cases, you probably deal with a lot of custom requests, and Gigacatalyst will save you time and increase your bottom line. Book a meeting at https://gigacatalyst.com/#contact and I'll help your team and customers build new functionality on top of your platform.
I've been reading Hacker News since I was 12 years old. I'm proud to launch for all of you and I want to hear your feedback on my product and comments!
Most AI chat applications (such as ChatGPT or Claude) stream their responses to the client as markdown text. As each new chunk of text arrives, the front end typically re-parses the entire markdown document to render the updated message. This works, but it can quickly slow down the UI for long responses.
I’ve been obsessing over ways to make this more efficient, so I wrote a markdown parser that parses streaming markdown (semi-)incrementally. Instead of re-processing the whole document each time, it only parses what’s new, handling each line exactly once. Block-level nodes are buffered until they’re provably complete (for example, once a paragraph is done and won’t be extended by more text). This also makes it possible to parse the markdown on the server, which is exactly what the main demo does. As a result, animating markdown blocks becomes much simpler and more efficient.
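The core observation is that most block-level markdown is decidable line by line: once a blank line (or a new block opener) arrives, the previous paragraph can never grow again, so it can be flushed and never re-parsed. A stripped-down sketch of just that buffering idea, in Python for brevity (the actual parser handles many more block types: lists, fences, tables):

    def stream_blocks(chunks):
        # Buffer lines until the current block provably can't grow, then
        # yield it exactly once. Only paragraphs are handled here.
        buf, partial = [], ""
        for chunk in chunks:
            partial += chunk
            *lines, partial = partial.split("\n")  # keep unterminated tail buffered
            for line in lines:
                if line.strip() == "":  # blank line closes the paragraph
                    if buf:
                        yield "\n".join(buf)  # complete block: parse/render once
                        buf = []
                else:
                    buf.append(line)
        if partial:
            buf.append(partial)
        if buf:
            yield "\n".join(buf)

    # Chunk boundaries are arbitrary; blocks still come out whole.
    for block in stream_blocks(["Hello **wo", "rld**\n\nNext par", "agraph\n"]):
        print(repr(block))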
Here’s a demo if you’d like to see it in action: https://markdownparser.vercel.app/experimental
Feel free to type 'Render a table with 10 rows' to see each table row animate in.
I’ve spent a lot of time thinking about this problem, so if you’re working on similar issues, I'd love to chat.