Show HN: ArchGW – An open-source intelligent proxy server for prompts
Hi HN! This is Salman, Adil, Shuguang and Co, working on ArchGW [1], an open-source, lightweight proxy server for prompts, written in Rust and built on top of Envoy [2]. Arch moves the critical but pesky handling and processing of prompts (task understanding, prompt routing, safety, and observability) outside business logic. It's an edge and egress proxy for agentic apps.
We've talked to hundreds of developers at places like Twilio, GE Healthcare, Red Hat, and Square, and heard a consistent theme in building AI apps: to move past a nascent demo, teams are left to their own devices to build out the middleware capabilities that let developers move faster and ship with confidence.
Today, building an enterprise-ready AI app means cobbling together a large set of mono-functional tools: adding LLM-based preprocessing steps to determine safety (e.g. applying governance and guardrails), asking clarifying questions to improve task performance, supporting common agentic operations by packaging and managing function-calling scenarios manually, etc. Not to mention all the undifferentiated work of incorporating different LLMs and versions, and managing resiliency, retries, and fallback logic.
ArchGW was built on the belief that prompts are nuanced and opaque user requests that require the same capabilities as traditional HTTP requests, including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization, all outside business logic. We helped build Envoy while at Lyft and think it offers a great foundation for a proxy that manages traffic for prompts.
Here are some additional details about the open-source project. ArchGW is written in Rust, and the request path has three main parts (a sketch of one request through the flow follows the list):
* Listener subsystem, which handles downstream (ingress) and upstream (egress) request processing.
* Prompt handler subsystem. This is where ArchGW makes decisions about the safety of the incoming request via its prompt_guard primitive and identifies where to forward the conversation via its prompt_target primitive.
* Model serving subsystem, the interface that hosts all the lightweight LLMs [3] engineered into ArchGW and offers a framework for things like hallucination detection for these models.
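To make that concrete, here is a minimal sketch of one call through the gateway. The port, path, and model name are illustrative assumptions rather than confirmed defaults; the gateway speaks an OpenAI-compatible chat interface in our demos, so a plain HTTP POST is enough to trace the three subsystems:

    # Sketch of one request through ArchGW. The listener address below is an
    # assumption for illustration; see the docs for the actual defaults.
    import requests

    resp = requests.post(
        "http://localhost:12000/v1/chat/completions",  # hypothetical listener
        json={
            "model": "gpt-4o",  # resolved by the gateway to a configured provider
            "messages": [{"role": "user", "content": "What's the GBP to USD rate?"}],
        },
        timeout=30,
    )
    # 1. The listener subsystem accepts the ingress request.
    # 2. The prompt handler applies prompt_guard checks (e.g. jailbreak detection)
    #    and matches the message to a prompt_target (e.g. a currency API).
    # 3. The model serving subsystem hosts the small LLMs doing that routing and
    #    function-calling work before the upstream call goes out.
    print(resp.json())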
We loved building this open-source project, and we believe this infrastructure primitive will help developers build faster, safer, and more personalized agents without all the manual prompt engineering and systems integration work needed to get there. We'd love to invite other developers to use and improve Arch. Please give it a shot and leave feedback here, or in our Discord channel [4].
Also, here is a quick demo of the project in action [5]. You can check out our public docs at [6]. Our models are also available at [7].
[1] https://github.com/katanemo/archgw
[2] https://www.envoyproxy.io/
[3] https://huggingface.co/collections/katanemo/arch-function-66...
[4] https://discord.com/channels/1292630766827737088/12926307682...
Hi, this is Adil, the co-founder who developed archgw. We are working tirelessly to create a framework that helps developers write agentic applications without having to write all the crufty/boilerplate code. At the very minimum, we provide observability and logging without adding much overhead. You can simply plug Arch gateway into your existing LLM application and you'll start seeing details like time-to-first-token, total latency, token count, and tons of other observability details. I recommend starting with our getting-started page [1].
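Here is a minimal sketch of that plug-in step, assuming the gateway listens on localhost:12000 (a hypothetical address; use whatever your config sets). The app keeps its existing OpenAI client and only changes base_url:

    # Point an existing OpenAI-client app at the gateway instead of the provider.
    # base_url is an assumed address; provider API keys live in the gateway config.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:12000/v1", api_key="not-needed-here")

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hello"}],
    )
    print(resp.choices[0].message.content)
    # No other app changes: the proxy records time-to-first-token, total latency,
    # and token counts as the request passes through.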
For more advanced use cases, I recommend looking at the llm_routing demo [2] and the currency_exchange demo [3].
We currently provide a seamless interface to major providers like OpenAI, Mistral, and DeepSeek, and also support hooking up local providers like Ollama [4].
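As an illustration of that provider abstraction, the same client call can fan out to different upstreams purely by model name. The identifiers below are assumptions for the sketch; the real names come from your gateway's provider config:

    # Sketch: one client, several providers behind the gateway. Model names are
    # illustrative; the gateway's llm provider config defines the real ones.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:12000/v1", api_key="not-needed-here")

    for model in ("gpt-4o", "mistral-large-latest", "deepseek-chat", "llama3"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "ping"}],
        )
        print(model, "->", resp.choices[0].message.content)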
[1] - https://github.com/katanemo/archgw?tab=readme-ov-file#quicks...
[2] - https://github.com/katanemo/archgw/tree/main/demos/use_cases...
[3] - https://github.com/katanemo/archgw/tree/main/demos/samples_p...
[4] - https://github.com/katanemo/archgw/tree/main/demos/use_cases...
I think I saw this a few months ago, but never followed up. Why train your own models? Aren't you better off using GPT or something like that to handle the tasks Arch uses specialized models for?
Speed. And separately, instruction fine-tuning an LLM for a specialized task like function calling or guardrails == better performance. Even Anthropic and other model providers suggest you separate tasks across LLMs to improve the overall user experience.
We take the tasks that aren't business- or domain-specific and trained our models to offer SOTA performance at 1/10th the cost and 10x the speed. For example, Arch-Function can process ~5K tokens per second.
How do you compare with https://github.com/comet-ml/opik in observability?
Opik is an evaluation tool first. Arch is a proxy server built on top of Envoy, so it borrows from a very robust observability source. The two are complementary in many ways.
Envoy is compatible with OTel out of the box, which is a big plus for observability. Plus, Envoy is designed for high-load data-plane (in-the-request-path) workloads and is used in every modern stack. There are several advantages to using Arch as the source of observability (traces, metrics, logs).
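As a sketch of what pulling those signals can look like: Envoy exposes Prometheus-format stats on its admin interface, so if that port is reachable (the address below is an assumption, not archgw's documented default), request-latency metrics are one HTTP GET away:

    # Scrape proxy-level metrics in Prometheus text format. The admin address is
    # an assumed value for illustration; Envoy's stock admin port is 9901.
    import requests

    stats = requests.get("http://localhost:9901/stats/prometheus", timeout=5).text
    for line in stats.splitlines():
        # upstream_rq_time is Envoy's per-cluster request-latency histogram
        if "upstream_rq_time" in line and not line.startswith("#"):
            print(line)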
Would love feedback. See if it is useful, or what adaptations would make it useful.