Meet 'Babel': An AI-powered Solution to Simplify Belgian Bureaucracy
Say hi to Babel: the result of our 24-hour Generative AI event! Learn more about our product in this two-part blog post:
- Part 1: Discover 'Babel': our AI tool that simplifies Belgian Bureaucracy
- Part 2: For the tech geeks: take a look under the hood of Babel
Part 1 - Discover 'Babel': our AI tool that simplifies Belgian Bureaucracy
On May 10th, we embarked on an extraordinary challenge to harness the true potential of Generative AI. A team of 12 talented software engineers and designers dedicated 24 hours to creating a groundbreaking digital product from scratch, aiming to overcome or simplify the intricate digital maze of Belgian bureaucracy.
The team's mission was clear: to simplify the overwhelming complexity of Belgian administrative procedures. After an intense 24-hour sprint, they proudly unveiled 'Babel' – an online tool designed to guide individuals towards the precise administrative information they need.
With 'Babel,' citizens gain access to a single platform that streamlines their administrative tasks and offers prompt, accurate responses to their questions.
'Babel' is a potential game-changer. Users can effortlessly input their queries, and the power of Generative AI allows the tool to extract the right answers from a vast database, meticulously curated from over 1,800 Belgian government webpages.
"This 24-hour journey has been mind-blowing," said Frederik De Bosschere, Tribe Lead at In The Pocket. "We've witnessed the remarkable capabilities of Generative AI in simplifying the convoluted world of Belgian bureaucracy. While 'Babel' is still a work in progress, its potential is awe-inspiring. Just imagine what we could achieve in a month or even a year."
Although ‘Babel’ isn’t ready to go public yet, it shows the true potential and impact of Generative AI on our work and lives. Generative AI can be used to solve problems with substantially less time and effort. And who knows? Maybe someone from the government will pick up ‘Babel’ and launch it for everyone to use.
Want to try it out for yourself? Go nuts.
Disclaimer:
- Babel is a proof-of-concept by In The Pocket, created in just 24 hours.
- It will contain mistakes and performance issues, so please be forgiving as you explore.
- Got feedback, or want to explore the possibilities of a similar product? Reach out at hello@inthepocket.com
Part 2 - A Look Under Babel's Hood
For the tech-loving people out there, let’s pop open the hood of Babel and look at how our backend people got the engine running during the 24-hour timeframe.
In the first hour of our hackathon, the team sat down to discuss the project’s architecture. This helped them divide tasks effectively and ensured seamless integration later on. First of all, they chose Google Cloud as their go-to cloud platform, implementing an event-driven indexing architecture. This simplified data uploading, URL additions, testing, and adjustments without deployments or code changes. To adhere to the principle of Separation of Concerns, they organised data into multiple buckets, with file uploads triggering Cloud Functions for scraping and indexing.
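The event-driven idea boils down to routing storage events to the right pipeline step. Here is a minimal, dependency-free sketch of such a dispatcher; the bucket names and return values are purely illustrative, not Babel’s actual configuration:

```python
# Sketch of event-driven indexing: a function receives a Cloud Storage
# "file finalized" event and dispatches on the bucket name.
# Bucket names below are hypothetical examples.

def handle_storage_event(event: dict) -> str:
    """Route a storage event to the right pipeline step."""
    bucket, name = event["bucket"], event["name"]
    if bucket == "babel-urls":          # a URL list was uploaded -> scrape it
        return f"scrape:{name}"
    if bucket == "babel-scraped-html":  # scraped HTML landed -> index it
        return f"index:{name}"
    return f"ignored:{name}"
```

Because each bucket triggers its own function, adding a new URL is just a file upload, with no redeploy needed.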
Language Model and Loader:
The team used LangChain, a framework for developing applications powered by language models. Its ready-made loaders proved very useful for speeding up development. In Babel’s case, an HTML loader was used to divide everything into 1,000-token chunks, which was necessary because LLMs cannot process arbitrarily long inputs. The loader was slightly rewritten, as it originally expected file paths while the team already had the content readily available.
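Conceptually, the chunking step looks like the sketch below. LangChain’s splitters count real model tokens; here whitespace-separated words stand in for tokens to keep the example dependency-free:

```python
# Rough sketch of the 1,000-token chunking step. Whitespace "tokens" are an
# approximation; a real splitter would use the model's tokenizer.

def chunk_text(text: str, max_tokens: int = 1000) -> list[str]:
    """Split text into pieces of at most max_tokens whitespace tokens."""
    words = text.split()
    return [
        " ".join(words[i:i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]
```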
Embeddings and Vector Database:
Besides that, Babel employs OpenAI's text-embedding-ada-002 to create numerical representations of text for relatedness measurements. These embeddings are beneficial for search, clustering, recommendations, anomaly detection, and classification. Elasticsearch was the chosen vector database, known for its reliability.
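Relatedness between embedding vectors is typically measured with cosine similarity, which the vector database computes at query time. A minimal illustration of the measurement itself:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Relatedness of two embedding vectors: 1.0 means identical direction,
    0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

At search time, the query is embedded with the same model and the stored chunks with the highest similarity scores are returned.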
LLM Interaction and PromptTemplate:
Our 24-hour heroes used LangChain to interact with gpt-3.5-turbo, a Large Language Model (LLM) from OpenAI. This allowed them to vectorise user queries and search the vector database in Elasticsearch. Chains in LangChain facilitated seamless integration of multiple components, such as user input, formatting with PromptTemplate, and LLM processing.
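A chain is essentially function composition: retrieve context, format a prompt, call the model. The sketch below captures that flow in plain Python, with a stand-in for the real gpt-3.5-turbo call:

```python
# Conceptual sketch of a retrieval + prompt + LLM chain.
# fake_llm is a placeholder for the real OpenAI call.

def format_prompt(question: str, context: str) -> str:
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def fake_llm(prompt: str) -> str:
    # Placeholder: echoes the question line instead of calling an API.
    return f"[answer to: {prompt.splitlines()[-1]}]"

def qa_chain(question: str, retrieve) -> str:
    context = retrieve(question)              # vector search step
    prompt = format_prompt(question, context)  # PromptTemplate step
    return fake_llm(prompt)                    # LLM step
```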
RetrievalQA Chain and PromptTemplate:
PromptTemplate, the “magic sauce”, tailored the prompts to Babel’s use case. The RetrievalQA chain in LangChain was a perfect fit for question-and-answer applications. To generate follow-up questions, the team adopted a specific template that returns an easily parsable list for the application.
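The trick behind the parsable list is simple: instruct the model to answer as a numbered list, then parse it line by line. The template wording and parser below are illustrative, not Babel’s exact prompt:

```python
# Hypothetical follow-up-question template: asking for a numbered list
# makes the model's output trivially parsable.
FOLLOW_UP_TEMPLATE = (
    "Based on the answer above, suggest 3 follow-up questions.\n"
    "Return them as a numbered list, one question per line."
)

def parse_numbered_list(text: str) -> list[str]:
    """Turn '1. Foo\\n2. Bar' style LLM output into a Python list."""
    items = []
    for line in text.splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            items.append(line.split(".", 1)[1].strip())
    return items
```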
REST Endpoint and Backend Streaming Solution:
The REST endpoint, built with FastAPI, handled querying and the retrieval of large contexts from OpenAI. To speed up response delivery, the team implemented a token-by-token streaming solution on the backend. Unfortunately, time constraints prevented them from completing the front-end integration.
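Token-by-token streaming boils down to a generator that yields each token as it arrives instead of waiting for the full answer. A real FastAPI endpoint would wrap such a generator in a StreamingResponse; here the tokens are simulated so the sketch stays self-contained:

```python
# Sketch of backend token streaming, server-sent-events style.
# In production, tokens would arrive incrementally from the LLM API.
from typing import Iterator

def stream_tokens(answer: str) -> Iterator[str]:
    """Yield the answer piece by piece as SSE-formatted lines."""
    for token in answer.split():
        yield f"data: {token}\n\n"
    yield "data: [DONE]\n\n"
```

The client can render each `data:` line as it arrives, so the user sees the answer appear word by word.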
Next Steps and Improvements:
Given the limited time, the team focused on establishing a solid foundation. For future improvements, they’re planning to:
- Implement real-time streaming of responses to eliminate waiting time.
- Explore a self-hosted Language Model (LLM) to enhance efficiency and maintain data privacy.
- Expand indexed pages to provide comprehensive and localized information.
Our tiger team’s hackathon journey has been one of innovation, collaboration, and relentless pursuit of efficiency and user satisfaction. Through strategic architecture discussions, leveraging cutting-edge technologies, and harnessing the power of large language models, they’ve laid a strong foundation for this project.
While there is always room for improvement, they can (and must!) be proud of the accomplishments they’ve achieved within the limited time constraints. Just imagine what this team could do in one month or even one year…