Meet Milton: Proekspert’s AI tool
In today’s world of AI, ready-made tools like ChatGPT have taken over workspaces. But as effective as they are, they often fail to meet the strict privacy and data security needs of businesses like ours. That’s why we decided to create Milton—a custom AI tool designed specifically for our work environment.

Milton was built to be secure, efficient, and tailored to our needs, allowing us to enjoy the benefits of AI while honoring our strict customer agreements. But building Milton came with its share of challenges, and it demanded real creativity.
To better understand Milton’s creation, we sat down with the key minds behind its development: Raido Lepp, IoT-Cloud Integration and Applications Delivery Lead, and Kristjan Pille, Software Engineer.
The inspiration behind Milton
Tell us about the initial idea and inspiration behind creating Milton
RL: “My feeling at that time was as follows: ChatGPT suddenly appeared and captured everyone’s attention. However, it came with significant criticism—primarily due to its data management issues, which didn’t align with our customer agreements. And rightly so—the risks of data leakage and uncontrolled use were too high.
Still, it was clear that AI was here to stay, and trying to ignore it or block its use would have been both foolish and counterproductive. Somewhere along the way, Kristjan discovered Azure AI in its beta stage, combined with open-source web solutions. From there, it became a passion project for Kristjan. With a small, selected group, we quickly began using it internally.”
KP: “Using ChatGPT for coding made me think how great it would be to have our own work-friendly chatbot. Privacy concerns with existing tools led me to Raido Lepp, and that’s when I said, ‘Hey, what if…,’ but in Estonian, of course!”
Milton’s evolution over the years
How long did it take you to build Milton?
RL: “Back in 2016, we toyed with a Slack bot called Milton Tormi Kana, using a neural-network-based machine learning algorithm we found on GitHub. It trained for about two weeks on powerful hardware, producing amusingly sporadic results. Tõnu Jaansoo treated neural networks as a hobby, and I facilitated its integration into Slack. Though modern Milton shares the name, the 2016 project didn’t directly influence today’s version, but the underlying concept continues to inspire.”
KP: “While Milton’s name has been around for years, its origins warrant a separate blog post—a story best told by Raido, Milton’s godfather. The chatbot has evolved significantly over time, and I’ve merely been part of its ongoing journey. I hope future contributors will continue to elevate Milton to new heights.”
Goals and the development process
What were Milton’s main goals, and how was it developed?
RL: “The main goal was simple—have fun and see what happens. At the same time, the potential need for such a tool was evident from the start. I didn’t personally contribute much. Kristjan likely spent a few weekends bringing Milton to life.”
KP: “The focus was to get Milton up and running quickly and efficiently, using open-source solutions wherever possible. The actual development was minimal—just enough to securely handle sensitive workplace data.”
What programming languages and technologies were used?
RL: “.NET and Azure ecosystem knowledge formed the foundation.”
KP: “We utilized TypeScript, Bicep, and React, hosted on various Azure services. Together, these made Milton efficient and scalable.”
Challenges
What were the biggest challenges you faced during the development?
KP: “One of the biggest challenges was fixing bugs in how conversations were displayed, which took far more effort than expected. Setting up authorization for Milton was also far from enjoyable.”
Can you share any interesting or unexpected problems you faced and how you solved them?
RL: “There weren’t major issues during development, but launching Milton brought its own challenges. We had to consider how to communicate its introduction—both internally and with clients—and run cost projections to estimate monthly expenses.”
KP: “There were some strange bugs, as I mentioned earlier, though I don’t recall the details anymore.”
Ensuring reliability
How do you ensure that Milton provides accurate and relevant responses to users?
RL: “Pure magic!”
KP: “Milton uses the Azure OpenAI ChatGPT model, which comes with various adjustable parameters like temperature, Top P, and frequency penalty. I experimented with these settings and calibrated them to what seemed the most promising configuration for Milton.”
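To make those knobs concrete, here is a small TypeScript sketch of the three sampling parameters KP mentions, clamped to the ranges the chat-completions API accepts. The interface, defaults, and the `tune` helper are illustrative examples for this post, not Milton’s actual configuration:

```typescript
// Illustrative sketch only: the three sampling parameters KP mentions,
// with example defaults — not Milton's real settings.
interface ChatTuning {
  temperature: number;      // randomness: 0 = focused, 2 = very creative
  topP: number;             // nucleus-sampling cutoff (0..1)
  frequencyPenalty: number; // discourages repeating the same tokens (-2..2)
}

// Clamp each value to the range the API accepts, so an out-of-range
// experiment can't produce an invalid request.
function tune(overrides: Partial<ChatTuning>): ChatTuning {
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(hi, Math.max(lo, v));
  return {
    temperature: clamp(overrides.temperature ?? 0.7, 0, 2),
    topP: clamp(overrides.topP ?? 0.95, 0, 1),
    frequencyPenalty: clamp(overrides.frequencyPenalty ?? 0, -2, 2),
  };
}
```

An object like this maps onto the corresponding request options of the Azure OpenAI chat-completions endpoint (`temperature`, `top_p`, and `frequency_penalty` in the REST API), so calibrating a chatbot largely means experimenting with these values and measuring answer quality, as KP describes.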
What kind of testing and validation processes did you use to ensure Milton’s reliability?
RL: “At first, we tested within a narrow inner circle, then expanded the test group until we released it to the entire company.”
KP: “Most of the testing and validation has happened—and continues to happen—through user feedback.”
Language preference
Does it matter which language Milton responds in? Is one language preferred?
RL: “It depends on the quality of the material available in each language. Estonian seems to be well represented online, but naturally there’s far more content available in English.”
KP: “Commonly used languages such as English, Spanish, French, and German work well. However, English is the preferred language, since the Azure OpenAI ChatGPT training data includes a vast amount of English content, leading to more detailed and accurate responses in English.”
Updates and the future
Do you need to update Milton regularly, and are there any upcoming features?
RL: “Regular updates of dependencies and upgrades to the solution and models are necessary.”
KP: “Occasionally, the Azure OpenAI ChatGPT model is updated as OpenAI releases new versions. Otherwise, the solution requires little to no maintenance.”
RL: “MS Teams integration initially seemed interesting, but it doesn’t offer the same flexibility as the web solution. It would be amazing to enable AI-powered searches through personal data sources via Milton. However, AI is becoming omnipresent anyway—it’s only a matter of time, perhaps a few years, before every service includes an ‘AI agent,’ and the focus shifts to aggregating results at a higher level.”
KP: “In the near future, probably not. There are a couple of ideas I’d like to see implemented, but hopefully next year brings more free time to work on them.”
What features or capabilities are you most proud of?
RL: “I’m proud that it’s being used, and that both I and everyone around me can be more skilled, efficient, and effective in our work because of it.”
KP: “Since the solution is open source, I can’t take credit for any specific features in Milton.”
How do you see the future of AI chatbots evolving?
RL: “I see chatbots evolving from today’s generic personas into more task-specific roles, with more integrations across platforms. We might also witness a new wave of innovation, such as Microsoft offering AI across all data sources, with seamless integration into the Office, Jira, and Confluence ecosystems. AI agents could become deeply embedded in larger, comprehensive solutions. Just as quickly as Milton appeared, it could fade into history—and that’s likely how it will go.”
KP: “In the near future, chatbots will gain more powerful language models and expanded functionality. In the distant future, who knows? Perhaps people will opt for brain chips, integrating directly with LLMs or whatever advanced models exist by then.
That said, I believe the current reliance on large powerhouse LLMs is just the first iteration. The real breakthrough will come when someone figures out how to make LLMs run as efficiently as the human brain—or somewhere in that ballpark. That’s when true innovation begins.”
Milton is already impacting our day-to-day operations, and we’re excited about its potential for growth. If this story has inspired you and you’re interested in exploring custom AI solutions for your organization, don’t hesitate to reach out!
Read more about Custom AI solutions here: