<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[DareData Blog]]></title><description><![CDATA[Your Data & AI Partner]]></description><link>https://blog.daredata.engineering/</link><image><url>https://blog.daredata.engineering/favicon.png</url><title>DareData Blog</title><link>https://blog.daredata.engineering/</link></image><generator>Ghost 3.16</generator><lastBuildDate>Thu, 16 Apr 2026 21:52:01 GMT</lastBuildDate><atom:link href="https://blog.daredata.engineering/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Voice architectures]]></title><description><![CDATA[<p>When building your voice agent, you’ll have to decide which voice architecture you’ll use. There are three architectures you can choose from, each with their own pros and cons. In this blog post, we’ll define each one and compare them, so you can decide which one better</p>]]></description><link>https://blog.daredata.engineering/voice-architectures/</link><guid isPermaLink="false">69986981f3b6880475e14a6e</guid><category><![CDATA[Voice AI]]></category><category><![CDATA[Generative AI]]></category><category><![CDATA[Technical]]></category><dc:creator><![CDATA[Bruno Vaz]]></dc:creator><pubDate>Fri, 27 Feb 2026 18:20:12 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2026/02/yassine-ait-tahit-uBqd-tGQI8o-unsplash-3.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2026/02/yassine-ait-tahit-uBqd-tGQI8o-unsplash-3.jpg" alt="Voice architectures"><p>When building your voice agent, you’ll have to decide which voice architecture you’ll use. There are three architectures you can choose from, each with their own pros and cons. 
In this blog post, we’ll define each one and compare them, so you can decide which one better suits your needs.</p><h2 id="the-3-voice-architectures">The 3 voice architectures</h2><p>Let’s start by defining the different architectures.</p><ul><li><strong>Cascade pipeline</strong>: it consists of three main components – a speech-to-text (STT) model that transcribes audio into text, so that the large language model (LLM) can process it and generate a text response, which is streamed to the text-to-speech (TTS) model to be converted into audio.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2026/02/unknown.png" class="kg-image" alt="Voice architectures"></figure><ul><li><strong>Half-cascade pipeline</strong>: it consists of two components – a multimodal large language model (MLLM) that consumes audio directly and returns a textual response that is passed on to a TTS model.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2026/02/unknown1.png" class="kg-image" alt="Voice architectures"></figure><ul><li><strong>Speech-to-speech models</strong>: an MLLM that can both consume and output audio.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2026/02/unknown2.png" class="kg-image" alt="Voice architectures"></figure><h3 id="voice-activity-detection-and-turn-detectors">Voice activity detection and turn detectors</h3><p>There are two more components that complement this pipeline to enable a smooth voice conversation and that are often <em>not</em> mentioned when talking about voice architectures: the <strong>voice activity detection</strong> (VAD) and <strong>turn detection</strong> models.<br>These models are a key part of the pipeline: they let the agent know when the user has ended their “turn”. 
For realtime APIs, such as OpenAI realtime or Google Live API, which serve speech-to-speech models, turn detection is already integrated, so you can use server-side turn detection. If you prefer, though, you can disable it and use your own.<br>The turn detector model works in conjunction with VAD, and this combination tends to perform better than VAD alone. This is because VAD only detects whether there is speech; silence alone doesn't mean the user has ended their turn, as they may speak slowly or pause briefly to think. Turn detector models, however, use context cues to better determine when the user has ended their turn. So, when VAD detects that speech has stopped, the turn detector model is triggered to determine whether the absence of speech marks the end of the turn.<br>Both VAD and turn detector models tend to be small models that can run locally on a CPU. An example of a VAD model is <a href="https://github.com/snakers4/silero-vad">silero VAD</a>, and examples of turn detection models are <a href="https://blog.livekit.io/improved-end-of-turn-model-cuts-voice-ai-interruptions-39/">Livekit’s MultilingualModel</a> and <a href="https://docs.pipecat.ai/server/utilities/smart-turn/smart-turn-overview">Pipecat’s smart turn</a>.</p><h2 id="comparison">Comparison</h2><p>Definitions alone only take you so far. That’s why we’ve compared the different architectures, from the available providers to the naturalness and performance of each.</p><h3 id="available-models-apis">Available models/APIs</h3><p>Regarding available models and APIs, the cascade architecture has several providers, both open-source/open-weight and managed solutions. Examples include ElevenLabs, DeepGram, or Cartesia for the STT; GPT, Gemini, or Deepseek for the LLM; and ElevenLabs, Inworld, or Hume for the TTS.<br>As for the half-cascade and speech-to-speech architectures, there aren’t that many providers at the time of writing. 
A notable example of a provider for the half-cascade architecture is <a href="https://www.ultravox.ai/">Ultravox</a>, which has its own managed solution as well as its open-weight models.<br>Other examples include OpenAI’s realtime API, which supports both the half-cascade and speech-to-speech architectures, or some AWS Sonic models. Google’s Live API only supports speech-to-speech architectures.</p><h3 id="voice-cloning">Voice Cloning</h3><p>Voice cloning is often a requirement for voice AI apps, as companies want a personalized voice that is unique to their brand. On this matter, the cascade and half-cascade pipelines, since they rely on an external TTS provider, allow for this feature. Providers such as ElevenLabs or Cartesia support voice cloning. These services tend to offer instant voice clones, where you just need to provide a few seconds of the voice you want to clone, but also professional voice clones, where the minimum is around 30 minutes of recordings. Professional voice clones tend to be more accurate and represent the voice more faithfully than instant voice clones.<br>As for speech-to-speech providers, this doesn’t seem to be as standard as it is for TTS models. OpenAI seems to <a href="https://developers.openai.com/api/docs/guides/text-to-speech#custom-voices">support voice cloning</a>, but only for eligible customers, whereas Google doesn’t seem to support this for their Live API at the moment.</p><h3 id="custom-voices">Custom voices</h3><p>Custom voices differ from voice cloning in that you can create a custom voice without cloning an existing one. For example, TTS providers like ElevenLabs let you create a brand new voice via prompting. Just describe the voice you want (<em>e.g.</em>, “middle-aged man with a strong Scottish accent”) and a voice will be generated in seconds.<br>That said, since the cascade and half-cascade architectures rely on a separate TTS provider, they both support custom voices. 
For speech-to-speech models, similar to voice cloning, this doesn’t seem to be a widely available feature.</p><h3 id="failure-points">Failure points</h3><p>Regarding points of failure, cascade architectures have three: the STT, the LLM, and the TTS. If one of these components fails, it will jeopardize your system, so it’s good to have fallbacks. For example, your main STT provider might be DeepGram, but if their service is down you might fall back to Cartesia. The same goes for the LLM and TTS components.<br>As for the half-cascade architecture, it has one fewer point of failure since there’s no separate STT component, and the speech-to-speech architecture has just a single failure point.</p><h3 id="latency">Latency</h3><p>When it comes to latency, the cascade architecture is generally the slowest of the three. You have to chain three models/APIs together, so if you use a managed service/API, this amounts to three network round trips. Nevertheless, nowadays, the time it takes to generate a response is quite low, allowing for realtime conversations.<br>Following this logic, the half-cascade architecture will be faster than the cascade one, and the speech-to-speech architecture is the fastest of the three.</p><h3 id="naturalness">Naturalness</h3><p>With the cascade architecture, the system may struggle to interpret sarcasm or certain intonations that change the meaning of a sentence, as the model generating the response only has access to the transcript (what was said) and not the audio (how it was said).<br>The half-cascade architecture is, in theory, more natural than the cascade architecture (as the model consumes the audio directly, so it can interpret paralinguistic cues), but not as natural as speech-to-speech models, which are the most natural of the three (besides receiving the audio directly, they can also change the acoustics of the audio response).</p><h3 id="performance">Performance</h3><p>When it comes to performance, cascade architectures, as they rely 
on text language models, are currently the best at instruction following and tool calling. Speech-to-speech models are still falling short of cascade pipelines. However, Ultravox, which follows the half-cascade architecture, tends to be better at instruction following and tool calling than speech-to-speech models, although not as good as a textual LM – see <a href="https://www.daily.co/blog/benchmarking-llms-for-voice-agent-use-cases/">Benchmarking LLMs for Voice Agent Use Cases</a>.</p><h3 id="summary-table">Summary table</h3><!--kg-card-begin: html--><table style="width: 100%; border-collapse: collapse; table-layout: fixed;">
	<tr>
		<th style="text-align: center; padding: 10px;"><b>Features/Architectures</b></th>
		<th style="text-align: center; padding: 10px;"><b>Cascade</b></th>
		<th style="text-align: center; padding: 10px;"><b>Half-cascade</b></th>
		<th style="text-align: center; padding: 10px;"><b>Speech-to-speech</b></th>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Available Models/APIs</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Several<br>• STT: ElevenLabs, DeepGram, Cartesia, etc.<br>• LLM: GPT, Gemini, Deepseek, etc.<br>• TTS: ElevenLabs, Inworld, Hume, etc.<br>…</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Not many<br>• OpenAI Realtime API<br>• Ultravox (not available in European environments)<br>• AWS Sonic<br>…</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Not many<br>• OpenAI Realtime API<br>• Google Live API<br>• AWS Sonic<br>…</td>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Voice Cloning</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Yes. The TTS component is independent and providers such as ElevenLabs support it.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Yes. The TTS component is independent and providers such as ElevenLabs support it.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Limited. OpenAI appears to support it, but only for eligible customers; Google’s Live API does not seem to at the time of writing.</td>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Custom Voice</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Yes. The TTS component is independent and providers such as ElevenLabs support it.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Yes. The TTS component is independent and providers such as ElevenLabs support it.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Possible via cloning on OpenAI, but it is not possible to create a synthetic voice the way you would with a standalone TTS.</td>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Failure Points</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">3 – STT, LLM, and TTS.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">2 – MLLM and TTS.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">1 – MLLM.</td>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Latency</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Depends on the model combination, but will typically be higher than the other two architectures.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Depends on the model combination, but will typically be higher than the speech-to-speech architecture.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Fastest of the 3 – fewer network roundtrips and processing is handled by a single model.</td>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Naturalness</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">May struggle to interpret sarcasm or certain intonations that change the meaning of a sentence, as the model generating the response only has access to the transcript.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">More natural than the cascade architecture (as it can consume the audio directly and interpret some paralinguistic cues), but less natural than speech-to-speech.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Most natural.</td>
	</tr>
	<tr>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;"><b>Performance</b></td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">The best at instruction following and tool calling.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Ultravox specifically tends to be better at instruction following and tool calling than speech-to-speech models, but it is not as good as a textual LM.</td>
		<td style="text-align: center; word-wrap: break-word; white-space: normal; padding: 10px;">Still not as good as the other two architectures in terms of instruction following and tool calling.</td>
	</tr>
</table><!--kg-card-end: html--><h2 id="what-architecture-should-you-use">What architecture should you use?</h2><p>All in all, cascade pipelines still tend to be the most used for complex production use cases, where your agent has to strictly follow a set of instructions and call multiple tools. They may be slower and less natural, but if accurate responses are a must, they are still the best option. However, if your use case is simpler and naturalness is a priority, you may want to try a speech-to-speech model via OpenAI’s realtime API or Google’s Live API. Bear in mind that options are limited at the time of writing when compared to the cascade pipelines. Nonetheless, the trend seems to be moving towards speech-to-speech models.<br>To back up this claim, just a few months ago (October 2025), Google stated that “<a href="https://web.archive.org/web/20251025221058/https://ai.google.dev/gemini-api/docs/live">It [half-cascade architecture] offers better performance and reliability in production environments, especially with tool use.</a>”. Now, though, Google no longer supports the half-cascade architecture, only native audio (speech-to-speech) models, which seems to hint they are moving towards speech-to-speech. Moreover, Ultravox AI’s CEO Zach Koch has written the <a href="https://www.ultravox.ai/blog/why-speech-to-speech-is-the-future-of-voice-ai">Why Speech-to-Speech is the Future of Voice AI</a> blog post, where he argues that “the era of component stacks is ending” (i.e., the cascade pipeline) and that “the future speaks speech-to-speech”.<br>As for the half-cascade architecture, Ultravox is a solid solution, but as shown above, it might migrate to a speech-to-speech architecture in the future. 
As for OpenAI’s realtime API, or even Amazon Sonic, it’s not clear whether they will follow the same path as Google and remove support for the half-cascade architecture.</p><h2 id="conclusion">Conclusion</h2><p>The cascade architecture is still the prevailing one for voice AI solutions, but “<a href="https://www.daily.co/blog/benchmarking-llms-for-voice-agent-use-cases/">speech-to-speech models are closing the gap</a>”. However, since stitching together a voice AI app is becoming increasingly simple, make sure you test at least two of the architectures to get a sense of how well they work for your use case. Frameworks like <a href="https://livekit.io/">livekit</a> or <a href="https://www.pipecat.ai/">pipecat</a>, for example, make it easy for you to create your voice app.</p>]]></content:encoded></item><item><title><![CDATA[2025: Product, Consulting, and Production]]></title><description><![CDATA[<p>2025 was not about intention. It was about proof. </p><p>This was the year when strategy met customers, production systems and enterprise constraints. We didn’t iterate in isolation, but we continued to deliver, deploy, and scale in the real world.</p><p>But, something did change in 2025. The maturity of our</p>]]></description><link>https://blog.daredata.engineering/2025-product-consulting-and-production/</link><guid isPermaLink="false">6961885ef3b6880475e149a3</guid><dc:creator><![CDATA[Rui Figueiredo]]></dc:creator><pubDate>Tue, 17 Feb 2026 22:27:50 GMT</pubDate><content:encoded><![CDATA[<p>2025 was not about intention. It was about proof. </p><p>This was the year when strategy met customers, production systems and enterprise constraints. We didn’t iterate in isolation, but we continued to deliver, deploy, and scale in the real world.</p><p>But, something did change in 2025. 
The maturity of our execution.</p><h3 id="from-consulting-roots-to-turning-experience-into-products"><strong>From Consulting Roots to Turning Experience into Products</strong></h3><p>DareData was built on high-impact consulting. That foundation remains one of our strongest assets. It’s where we earned trust, learned how enterprises truly operate, and confronted the limits of one-off solutions.</p><p>Over time, one pattern became impossible to ignore: AI initiatives fail not because of models, but because systems don’t scale, agents don’t improve, and ownership dissolves once pilots end.</p><p>In 2025, we crossed a decisive threshold.</p><ul><li>We are no longer a pure services company.</li><li>We built Gen-OS to address the exact failures we kept encountering—fragmentation, lack of control, and AI systems that never make it to sustained production. Today, Gen-OS runs in real enterprise environments as an AI Operating System designed to integrate systems, scale reliably, and keep humans in control.</li></ul><p><em>This was not a pivot away from consulting. It was an evolution of it.</em></p><p>We intentionally combined product and services—each with clear responsibility.</p><ul><li><strong>Product</strong> delivers scalable, repeatable platforms with long-term leverage</li><li><strong>Consulting</strong> delivers context, execution, and production reality</li></ul><p>To support this model, we established two distinct, mature disciplines:</p><ul><li><strong>A product team</strong> focused on discovering, building, and evolving Gen-OS with long-term vision and discipline</li><li><strong>A consulting team</strong> focused on delivering impact inside real enterprise constraints—adapting fast, solving complex problems on the ground, and <strong>deploying and configuring Gen-OS in production environments</strong></li></ul><p>This clarity changed everything. We stopped turning every engagement into a custom product. 
We accelerated delivery and we gave our teams clear paths to mastery in the work they do best. Product gives us scale. Consulting ensures it works. Together, they form a system that compounds.</p><h3 id="delivering-impact-inside-enterprise-reality">Delivering Impact Inside Enterprise Reality</h3><p>In 2025, more than <strong>15 new enterprise clients</strong> chose DareData. Across those partnerships, we delivered <strong>more than 60 projects</strong>.</p><p>Enterprise environments are complex by design. Timelines are long, systems are fragmented, and constraints are real.</p><h3 id="growth-that-reflects-real-value">Growth That Reflects Real Value</h3><p>The results followed. In 2025, our revenue grew by <strong>more than 60%</strong>. This growth was not driven by hype or experimentation. It came from:</p><ul><li>Clear positioning</li><li>Repeatable offerings</li><li>Trusted delivery</li><li>A disciplined balance between product and services</li><li>An extraordinary team of people who make DareData what it is</li></ul><h3 id="scaling-without-compromising-standards">Scaling Without Compromising Standards</h3><p>Growth only matters if quality holds. In 2025, our network expanded by <strong>more than 60 DareData network members</strong>. We scaled capacity without sacrificing standards, trust, or culture. Our network’s happiness remained strong because culture doesn’t scale by default—it scales through deliberate leadership and daily effort. We grew fast. We stayed aligned. And we protected what made DareData work in the first place.</p><h3 id="looking-ahead">Looking Ahead</h3><p>2025 was not a conclusion. 
It was validation.</p><p>We move forward with:</p><ul><li>A product running in production</li><li>Teams built for scale—across product and consulting</li><li>Proven growth rooted in delivery, not promises</li><li>The same ambition, now backed by evidence</li></ul><p>To our clients, partners, and every DareData member: thank you for making this progress possible.</p><p><strong>2026 will be different. And that’s what makes the ride fun.</strong></p>]]></content:encoded></item><item><title><![CDATA[The Largest Challenge to Implement AI in Companies is Non-Determinism]]></title><description><![CDATA[<p>With the rapid advances in technology, it might seem that deploying AI in companies should be relatively straightforward. After all, with tools like ChatGPT or other advanced models that appear to perform like “magic,” how hard can it really be to get results in enterprises and companies?</p><p>Many companies still</p>]]></description><link>https://blog.daredata.engineering/the-largest-challenge-to-implement-ai-in-companies-is-non-determinism/</link><guid isPermaLink="false">68fbce36f3b6880475e148ef</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Generative AI]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Fri, 24 Oct 2025 19:09:02 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/10/solen-feyissa-IfWFKG3FXE4-unsplash--1--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2025/10/solen-feyissa-IfWFKG3FXE4-unsplash--1--1.jpg" alt="The Largest Challenge to Implement AI in Companies is Non-Determinism"><p>With the rapid advances in technology, it might seem that deploying AI in companies should be relatively straightforward. 
After all, with tools like ChatGPT or other advanced models that appear to perform like “magic,” how hard can it really be to get results in enterprises and companies?</p><p>Many companies still struggle to achieve a solid return on investment (ROI) from their AI initiatives. The issue isn’t access to technology, of course. It’s understanding and managing the nature of AI.</p><p>AI systems, particularly those based on machine learning (most AI systems today), are <strong>non-deterministic</strong>. This means their outputs can vary, even when given the same input. The system’s reasoning involves probabilities, randomness, and complex interactions that make its behavior unpredictable. You can see this with ChatGPT, as it rarely gives the exact same answer to the same question twice.</p><p>For organizations, this unpredictability creates challenges in <strong>reliability, reproducibility, and governance</strong>. You can’t treat AI like traditional software - where outputs are fixed and rules are explicit. Instead, you need a framework that <strong>controls for non-determinism</strong> through consistent data quality, human oversight, model monitoring, and well-defined feedback loops.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://substackcdn.com/image/fetch/$s_!VulN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66b084ba-79ea-456d-8d27-3f5e1c0746f5_804x384.png" class="kg-image" alt="The Largest Challenge to Implement AI in Companies is Non-Determinism"><figcaption>The probabilistic nature of AI is a big issue for corporations. Image source: https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/</figcaption></figure><hr><h3 id="humans-save-the-day">Humans save the day</h3><p>You’ve probably heard that the future of work will be <em>Human + AI</em>. Tasks will be automated, but jobs are made up of many tasks. 
As automation expands, we’ll see new roles emerge - ones that integrate AI-driven tasks with human judgment. People will step in when AI systems reach their limits or fail to recognize their own mistakes.</p><p>Humans and machines will need to work together, not only because it’s the humanistic thing to do, but because it’s operationally necessary. When AI systems reach their limits, you still need people who can reason through ambiguity, understand nuance, and use broader context to make the right call.</p><p><strong>And make no mistake: AI will fail.</strong> It’s built to predict, not to be perfect. Every prediction carries a degree of uncertainty, and that means some level of error will always be present.</p><p>In a business context, those errors can be costly. A wrong decision made by an algorithm can trigger financial loss, reputational damage, or even legal exposure. In extreme cases, it can push a company to the edge of bankruptcy.</p><p>That’s why most organizations remain cautious about deploying AI in their most critical processes. A well-designed system must include strong safeguards that keep humans in the loop.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://substackcdn.com/image/fetch/$s_!4-vV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b3afe24-1d55-4527-8b9d-98c12d0e669f_625x565.svg" class="kg-image" alt="The Largest Challenge to Implement AI in Companies is Non-Determinism" title="A human-in-the-loop diagram"><figcaption>Human-in-the-Loop Process - Source: <a target="_blank" rel="noopener noreferrer nofollow" href="https://developers.cloudflare.com/agents/concepts/human-in-the-loop/">CloudFlare</a></figcaption></figure><hr><h3 id="it-s-still-a-somewhat-engineering-problem">It’s still a (somewhat) Engineering Problem</h3><p>Non-determinism is an engineering problem - but not one that traditional methods can fully solve. 
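</p><p>To make the non-determinism concrete, here is a toy decoding step in Python. It only illustrates where sampling randomness enters generation; real LLM inference adds further variability on top (batching, floating-point ordering), and the function names here are hypothetical:</p>

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature rescales the logits before normalizing to probabilities:
    # low temperature sharpens the distribution, high temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    # Temperature 0 -> greedy argmax, which is repeatable.
    if temperature == 0:
        return tokens[logits.index(max(logits))]
    # Otherwise sample, so repeated calls on the same input can differ.
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.5]

print(sample_token(tokens, logits, temperature=0))    # prints "yes" on every run
print(sample_token(tokens, logits, temperature=1.0))  # may print any of the three
```

<p>Greedy decoding is repeatable, but production systems rarely run greedy - and even when they do, infrastructure effects can reintroduce variance.</p><p>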
AI systems behave probabilistically, yet this doesn’t make engineering less relevant. It makes it more essential than ever. Just as classical machine learning only became valuable when data scientists moved out of notebooks and into production, LLMs will only translate into business results when engineers mitigate the non-determinism problem.</p><p>AI demands a new kind of engineering discipline - one that treats uncertainty as part of the design, not as noise to be eliminated. This “statistics-at-scale” mindset is what allows companies to deploy AI responsibly. Engineering now has to manage randomness, monitor performance drift, and build feedback systems that learn continuously.</p><p>For companies, handling non-determinism at scale starts with good engineering fundamentals: reliable data pipelines, contextual integrations, access control, and rigorous testing. But even after those are solved, another layer remains: the new discipline of engineering for reliability around inherently unpredictable systems. Call it “probabilistic scaling”.</p><hr><h3 id="building-for-reliability-in-a-probabilistic-world">Building for Reliability in a Probabilistic World</h3><p>When companies deploy AI use cases, they often start with models that perform only “good enough.” Early systems operate with limited accuracy and heavy human oversight.</p><p>That’s normal: the first goal is not perfection, but stability. A well-designed engineering system learns from its own operations: every human correction, every feedback signal, every edge case feeds improvement. Over time, the system should automate more of what humans once handled manually. This is how accuracy compounds - through iteration, monitoring, and disciplined feedback loops rather than one-off model training. 
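</p><p>A common shape for that loop - route low-confidence outputs to a human, and bank every correction as training signal for the next iteration - can be sketched as follows (illustrative structure only; the threshold value and function names are made up, not a specific product’s design):</p>

```python
# Minimal human-in-the-loop routing sketch.

REVIEW_THRESHOLD = 0.85   # tune per use case and risk tolerance
feedback_log = []         # human corrections, kept as future training data

def route(prediction: str, confidence: float) -> str:
    """Auto-apply confident predictions; send the rest to a human."""
    return "auto" if confidence >= REVIEW_THRESHOLD else "human_review"

def record_correction(model_output: str, human_output: str) -> None:
    """Every human fix becomes a labeled example for the next iteration."""
    if model_output != human_output:
        feedback_log.append({"model": model_output, "human": human_output})

print(route("approve invoice", 0.97))  # prints "auto"
print(route("approve invoice", 0.60))  # prints "human_review"
record_correction("approve invoice", "reject invoice")
print(len(feedback_log))               # prints 1
```

<p>As accuracy compounds, the threshold can be relaxed for low-risk flows and kept strict for critical ones - the gradual handover from human to machine described above.</p><p>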
It also means accepting that not every case is automatable.</p><hr><h3 id="adapting-to-a-changing-world">Adapting to a Changing World</h3><p>The world doesn’t stand still, and neither can statistical systems.</p><p>Market conditions, customer behavior, your enterprise, and external data sources evolve constantly - meaning yesterday’s AI model is already outdated.</p><p>Reliable AI infrastructures must be built for post-deployment change. When the world shifts, AI systems must incorporate that new information easily. This requires an ecosystem designed to sense, adapt, and evolve: when the environment shifts (new behaviors, new data patterns, an evolving enterprise), the system captures that change, interprets it, and adjusts.</p><p>The AI of the future will only be safe when organizations are able to adapt as fast as the world changes around them. Deploying AI isn’t about reaching a “final state” or a “perfectly accurate system”; it’s about building a system that learns, corrects, and evolves continuously.</p><p>The companies that will thrive are not those with the most advanced models, but those with the strongest post-deployment infrastructure. 
Systems that monitor, guide, and improve will be the ones to evolve enterprises to the next stage.</p>]]></content:encoded></item><item><title><![CDATA[Factory AI: How Smart Manufacturing is Powered by AI]]></title><description><![CDATA[AI in manufacturing is transforming factory floors with predictive maintenance, quality control, and generative AI—boosting efficiency, reliability, and profits.]]></description><link>https://blog.daredata.engineering/factory-ai-how-smart-manufacturing-is-powered-by-ai/</link><guid isPermaLink="false">68a5e610f3b6880475e1481a</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Smart Factories]]></category><category><![CDATA[AI in Manufacturing]]></category><category><![CDATA[Smart Manufacturing]]></category><category><![CDATA[Digital Twin]]></category><category><![CDATA[Generative AI]]></category><category><![CDATA[Gen-OS]]></category><category><![CDATA[GenAI]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Fri, 22 Aug 2025 16:27:24 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/08/homa-appliances-ERXFD4jLpJc-unsplash--3-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2025/08/homa-appliances-ERXFD4jLpJc-unsplash--3-.jpg" alt="Factory AI: How Smart Manufacturing is Powered by AI"><p></p><p>Manufacturing is undergoing a revolution with the infusion of AI and data-driven intelligence on the factory floor. Often referred to as <em>“smart manufacturing”</em>, this trend involves using machine learning, computer vision, and generative AI to optimize production processes. 
</p><p>For enterprise leaders in sectors like automotive, electronics, or industrial goods, the promise of AI in manufacturing is compelling: increased uptime, higher quality, reduced waste, and more agile operations, all of which can directly <strong>expand profit margins</strong> in tight markets.</p><p>In recent years, the industrial market has been flooded with promises that AI would radically transform operations and efficiency. Yet much of this came in the form of overhyped solutions: especially “traditional AI” approaches that lacked proper software engineering, scalability, and integration into existing systems. As a result, many companies struggled to move beyond isolated proofs of concept, finding it difficult to actually deploy AI on the factory floor. On top of that, the inherent constraints of industrial environments - legacy equipment, strict safety and compliance requirements, data fragmentation, and limited IT resources - make it even harder to unlock the full potential of these technologies.</p><p>In this post, we’ll explore a range of use cases that have truly transformed the industry - examples where companies have successfully moved beyond proofs of concept into real, scalable impact.</p><hr><h3 id="the-push-for-efficiency-with-genai">The Push for Efficiency with GenAI </h3><p></p><p>Manufacturing has always been about efficiency, and AI is turbo-charging this imperative. </p><p>According to a 2024 Google Cloud survey, nearly 60% of organizations in the manufacturing and automotive sectors have already moved generative AI use cases into production. Many are seeing <a href="https://cloud.google.com/transform/manufacturing-gen-ai-roi-report-dozen-reasons-ai-value#:~:text=2.%2086,or%20more">measurable returns</a>: <strong>86% of early adopters</strong> reported at least a 6% revenue gain from generative AI initiatives. </p><p>Efficiency gains from AI can extend across the entire value chain, from the factory floor to the back office. 
These improvements take different forms, such as:</p><ul><li><strong>Automated email handling</strong>: managing orders, responding to customer requests, and reducing the burden on service teams.</li><li><strong>Automated document processing</strong>: extracting and organizing information from unstructured data sources (manuals, reports, compliance documents) so employees spend less time searching and more time acting.</li><li><strong>Dynamic dashboards</strong>: generating real-time insights from production and factory data, enabling managers to monitor performance, detect anomalies, and make faster decisions.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUceagviGi_cLiNMgFkVwMzqVM3ofWaFtyeJCTYTPwQoOloROHNWrSi36FirQOLU9wvuc4t3TNc5mw7fFRfcSM_NesJFWWFJYDgzjuGCC32wyyuRo17Z9W7NxfoSO3MGtAwTeY3B8g=s2048?key=ePNte8TsczOOW7iRbO1pEw" class="kg-image" alt="Factory AI: How Smart Manufacturing is Powered by AI"><figcaption>Gen-OS can help automate parts of your backlog, turning repetitive tasks into AI workflows.</figcaption></figure><hr><h3 id="the-drive-for-reliability-with-predictive-maintenance-and-quality-control">The Drive for Reliability with Predictive Maintenance and Quality Control</h3><p></p><p>Gen-AI is not the only breakthrough shaping Factory AI. Several high-impact applications are already delivering results, but their success depends on more than just clever algorithms. It requires strong engineering and the right deployment strategy.</p><p><strong>Predictive Maintenance</strong>: By analyzing sensor data (vibration, temperature, energy consumption), AI systems predict equipment failures before they happen. This proactive approach prevents breakdowns, avoids costly downtime, and extends machine life. One manufacturer reduced unplanned downtime by 68% and saved $4.2 million annually with AI-driven predictive maintenance. 
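To make the idea concrete, here is a deliberately simple, illustrative stand-in for such a system: a baseline-plus-threshold check on a single vibration signal. Real deployments learn failure signatures across many sensors; all names and numbers below are invented for the sketch:

```python
from statistics import mean, stdev

def maintenance_alerts(readings, baseline_n=50, window=5, k=3.0):
    """Flag indices where a short rolling average of a sensor signal
    (e.g. bearing vibration in mm/s) drifts beyond k standard deviations
    of the machine's healthy baseline - a toy stand-in for the learned
    failure-prediction models described above.
    """
    baseline = readings[:baseline_n]
    mu, sigma = mean(baseline), stdev(baseline)
    alerts = []
    for i in range(baseline_n, len(readings) - window + 1):
        # Rolling mean smooths out single-sample noise before alerting.
        if mean(readings[i:i + window]) > mu + k * sigma:
            alerts.append(i)
    return alerts
```

A real system would feed alerts like these into a work-order queue instead of a list, and learn per-machine baselines rather than taking the first N readings.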
Yet, achieving this at scale is only possible with robust engineering to integrate AI models with industrial control systems, and with on-premise deployment to process sensitive machine data securely and in real time.</p><p><strong>Quality Control</strong>: On the assembly line, AI-powered systems can detect defects invisible to the human eye (microscopic cracks, surface flaws, or color mismatches) at high speed. These models improve continuously as they learn from thousands of product images. The result is less waste, reduced inspection costs, and higher first-pass yields.</p><p><strong>Process Optimization</strong>: One example from DareData illustrates how AI can go beyond monitoring and actively improve production. Manually assessing product quality on each run requires deep process knowledge and years of experience. Instead, DareData builds quality-decision ML models trained on historical operational data and process parameters. These models run directly on factory-floor machines and automatically adjust settings in real time. The result is optimized configurations that minimize resource usage (energy, raw materials, water) while maintaining consistent product quality. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUcuA8Rzxjzq5QQI1wCudCy7T2Kgid30CjEYNuQsZHx199Dt4MrvSPje9s6ONBVchqkXOVi9xxs8Z_uVPMlyiFNXheycm4_SW0b1LFZ3ARCHW3x9m3mmeM9SQJRdi1SgWZKBQdZUwQ=s2048?key=ePNte8TsczOOW7iRbO1pEw" class="kg-image" title="IMG_1700.jpg" alt="Factory AI: How Smart Manufacturing is Powered by AI"><figcaption>One of these machines runs an AI algorithm powered by DareData</figcaption></figure><hr><h3 id="robotics-and-automation">Robotics and Automation</h3><p></p><p>Advanced robots equipped with AI are becoming more adaptable and intelligent. 
Unlike traditional robots that perform one task, AI-driven robots can use vision and reinforcement learning to handle multiple complex tasks or even adjust on the fly to variability. In “smart factories”, robotic arms might work alongside humans, guided by AI to ensure safety and efficiency. This collaboration can increase throughput and reduce error rates in assembly tasks.</p><hr><h3 id="benefits-at-a-glance">Benefits at a Glance</h3><p></p><p>AI in manufacturing delivers benefits that map directly to business KPIs: higher <strong>throughput</strong> (producing more with the same resources), better <strong>quality</strong> (fewer defects and returns), lower <strong>downtime</strong> (more operational hours), and <strong>cost savings</strong> through efficiency and waste reduction. <a href="https://cloud.google.com/transform/manufacturing-gen-ai-roi-report-dozen-reasons-ai-value">Many early adopters report a doubling of productivity</a> in some areas, as AI not only automates tasks but also <strong>augments worker decision-making</strong>. For example, an AI system might assist human operators by recommending optimal machine settings or flagging safety hazards in real time, effectively acting like a smart assistant on the factory floor.</p><h3 id="challenges-and-considerations">Challenges and Considerations</h3><p></p><p>Implementing AI in legacy factory environments isn’t plug-and-play. Enterprises often face data challenges: machines may not have been designed to produce the data needed, and data from different systems is often siloed. Ensuring data quality and integration (often via IoT sensors and industrial data platforms) is step one. </p><p>There’s also a talent aspect: manufacturing firms need reliable partners with MLOps engineers who understand both AI and industrial operations. Change management is also key: frontline workers should be trained to trust and effectively use AI insights (e.g. maintenance crews acting on AI alerts). 
Finally, governance is important because an error in a factory AI system (like a mis-classified defect or a bad maintenance schedule) can have costly repercussions. Human oversight and gradual rollouts of AI solutions (starting as decision support tools before full automation) are prudent strategies.</p>]]></content:encoded></item><item><title><![CDATA[Gen-OS: The Platform, the Apps, the Toolkit]]></title><description><![CDATA[Most AI projects stall after pilots due to poor systems, workflows, and architecture. By focusing on business impact, using a reliable AI platform, and avoiding vendor lock-in with open, integrated tools like Gen-OS, enterprises can scale AI and achieve measurable results.
]]></description><link>https://blog.daredata.engineering/scaling-ai-projects-in-enterprise-how-to/</link><guid isPermaLink="false">68930b3af3b6880475e146f9</guid><category><![CDATA[Gen-OS]]></category><category><![CDATA[GenAI]]></category><category><![CDATA[Generative AI]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Wed, 13 Aug 2025 08:49:47 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/08/image-1--2--1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2025/08/image-1--2--1.png" alt="Gen-OS: The Platform, the Apps, the Toolkit"><p>By now, we should be seeing AI transitioning from experimental pilots to a key enabler of business transformation. The technology exists, and executive backing is in place, yet most AI projects never progress beyond the proof-of-concept stage.</p><p>There are multiple reasons, but one stands out: successful AI at scale is less about clever algorithms and more about having the right systems, workflows, and architecture. </p><p><strong>Enterprise systems are messy, complex, and far less structured than most people think. </strong>Large companies and incumbents have traditionally focused on serving customers and meeting operational priorities. As a result, IT systems were often treated as back-office enablers, with governance centered on ensuring the right data for operations, not for analytics or competitive advantage. And this went on for decades.</p><p>So in the new AI-first world, is there a way for these companies to gain a competitive edge? Absolutely. But it requires a shift in mindset, from viewing AI as a one-off experiment to embedding it as a capability across the organization. 
<strong>With the right frameworks, platforms, and data practices in place, even the most traditional enterprises can move beyond pilots and start delivering real impact.</strong></p><p>In this post, we’ll share the practical framework we've been delivering to customers to help scale their AI projects.</p><hr><h2 id="1-start-with-business-impact-not-pocs">1. <strong>Start with Business Impact, Not POCs</strong></h2><p></p><p>AI is a means to an end, not the end itself. </p><p>Before writing a single line of code, there are two things you need to do:</p><ul><li>Define the business outcome you want to improve. Are you trying to reduce churn? Improve support response times? Automate repetitive back-office processes?</li><li>Then, after identifying the high-impact use cases, prioritize those where data is available and measurable outcomes exist. </li></ul><p>When you combine this business impact analysis with technology, you get what we call a Gen-OS App. The Apps are blocks available in our Gen-OS platform that achieve one (or more) concrete business results: </p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/08/image-2.png" class="kg-image" alt="Gen-OS: The Platform, the Apps, the Toolkit"></figure><p>Let's jump to concrete examples:</p><ul><li><strong>A GenAI Chatbot</strong> that achieves 4.7/5 customer satisfaction with 73% of customer support automated with no human intervention. Our customer saved 400k€+ throughout the year.</li><li><strong>A Voice Intent Detection Bot</strong> to sort out the simple customer support issues, handling 25+ use cases inside the company.</li><li>A <strong>Voice Assistant</strong> that automates call resolution using voice recognition and LLMs with 62%+ automatic resolution and 97% answer accuracy.</li><li><strong>An AI Co-Pilot for Support Agents</strong> that accelerates information retrieval, helping agents respond faster. 
This led to a 15–20% reduction in customer resolution time.</li><li><strong>An Automated Legal Workflow:</strong> Automates legal workflows from document to insight with 90% accuracy and saved 60k€ for one of our customers in just 2 weeks.</li></ul><p><strong>Stay in control, without compromising on value. </strong>You define the business problem. We provide secure, cost-effective AI apps designed to scale safely in production and deliver immense savings in cost or time.</p><hr><h2 id="2-deploy-on-a-reliable-ai-platform">2. <strong>Deploy on a Reliable AI Platform</strong></h2><p></p><p>The real complexity of AI comes after the model is built. You need a platform that allows you to deploy, monitor, support, and improve AI systems. That’s what the <strong>Gen-OS Platform</strong> was built for. Gen-OS enables you to:</p><ul><li><strong>Monitor </strong>any AI (or group of AIs) on your App.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/08/image-3.png" class="kg-image" alt="Gen-OS: The Platform, the Apps, the Toolkit"></figure><ul><li><strong>Support </strong>the AI system with Human-in-the-loop mechanisms that activate whenever an AI interaction is suspected to be incorrect.<br></li></ul><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/08/image-4.png" class="kg-image" alt="Gen-OS: The Platform, the Apps, the Toolkit"></figure><ul><li>And finally, <strong>improve</strong>: Identify where performance drops, uncover blind spots, track unresolved issues, and understand exactly how far you are from your outcomes.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/08/image-5.png" class="kg-image" alt="Gen-OS: The Platform, the Apps, the Toolkit"></figure><hr><h2 id="3-avoid-vendor-lock-in-with-open-tooling"><strong>3. 
Avoid Vendor Lock-In with Open Tooling</strong></h2><p></p><p>A common trap in enterprise AI is getting locked into proprietary tools or too many vertical tools with no integration. </p><p>The Gen-OS Toolkit is open by design, giving you the flexibility to plug into your existing infrastructure without relinquishing control.</p><p>🔓 <strong>Best Practice</strong>: Our toolkit favors open-source tools, standard connectors, and reference architectures.</p><p>Also, Gen-OS aims to help you work with the systems you already use: CRMs, ERPs, support platforms, and internal APIs. <strong>If it doesn’t integrate, it won’t scale.</strong></p><p><strong>Our toolkit, apps, and platform are designed</strong> with interoperability and security from the start.</p><blockquote>🔌 <em>Gen-OS connects to your enterprise systems through a robust data and embedding layer, with built-in security and integration tools.</em></blockquote><hr><h2 id="final-thoughts">Final Thoughts</h2><p></p><p>Scaling AI in the enterprise is not about building a better AI model or setting up dozens of AI tools. It’s about building the right system for scale and low friction.</p><p>If you want to improve your customer support or automate sales workflows, following a structured framework and using tools like <strong>Gen-OS</strong> will help your organization scale AI with confidence, performance, and control.</p><p><a href="https://blog.daredata.engineering/p/9d4def3f-cfa1-4b9a-a8e5-262dfe5dd11d/ivo@daredata.ai">Contact me</a> for a tailored walkthrough or to learn more about the Gen-OS Platform.</p><p><strong>Ready to scale AI in your enterprise?</strong></p>]]></content:encoded></item><item><title><![CDATA[AI at the game table: From Foosball trash talk to Poker face reads]]></title><description><![CDATA[From a trash-talking AI foosball ref to a poker expert that reads bluffs. 
A dive into computer vision projects tackling speed, stats, and the art of deception.]]></description><link>https://blog.daredata.engineering/when-ai-meets-the-game-table-from-foosball-trash-talk-to-poker-face-reads/</link><guid isPermaLink="false">687e56aca91fb90411ec9ed7</guid><category><![CDATA[Computer Vision]]></category><category><![CDATA[Sports Analytics]]></category><category><![CDATA[Behavioral Analysis]]></category><dc:creator><![CDATA[Tiago Luís Mota]]></dc:creator><pubDate>Wed, 23 Jul 2025 13:32:10 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1546443046-ed1ce6ffd1ab?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGdhbWUlMjBhbmFseXRpY3N8ZW58MHx8fHwxNzUzMTc4ODM2fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h3 id="taught-machines-to-read-minds-and-referee-a-game-">Taught machines to read minds and referee a game!</h3><img src="https://images.unsplash.com/photo-1546443046-ed1ce6ffd1ab?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGdhbWUlMjBhbmFseXRpY3N8ZW58MHx8fHwxNzUzMTc4ODM2fDA&ixlib=rb-4.1.0&q=80&w=2000" alt="AI at the game table: From Foosball trash talk to Poker face reads"><p><strong>By blending artificial intelligence with both playful and competitive gaming, I’ve found that AI doesn’t just join the game, it changes the rules.</strong> From developing an AI foosball referee with a personality to a poker expert that can spot a lie, these projects explore the fun and challenging frontier of computer vision.</p><p>Picture this: a system that can detect the ball and players, track every move, and display the stats - prepare to be shamed for your 52 shots off target! 
And just when you think it couldn’t get any better - or worse - there’s even a cheeky live commentator powered by AI, ready to roast your every miss and celebrate your every win.</p><p><em>Disclaimer: I can’t guarantee your friendships will survive the banter!</em></p><p>But the fun doesn’t stop here. <strong>What if AI could also sit at a poker table, analyzing players’ behaviour to sniff out a bluff before anyone else can?</strong> Suddenly, the game shifts from entertainment to strategy - it’s about nerves, skill, and outsmarting not just the opponents, but the smartest technology in the room.</p><hr><h2 id="foosball-chaos-fast-shots-stats-and-sassiness">Foosball chaos: fast shots, stats, and sassiness</h2><h3 id="why-foosball">Why Foosball?</h3><p>Foosball is an obvious choice if you need a testbed for fast, fun, and interactive AI. The goal of this project was to turn something that is already quite entertaining by itself - foosball - into an even more immersive experience. <strong>And the speed, unpredictability, and fast-paced action - the ball can reach velocities of up to 15 m/s - create the perfect test environment for computer vision.</strong> Imagine tracking a tiny ball moving across a confined space while human players frantically spin rods and block shots. If the algorithms can handle this chaos, they can handle almost any sport tracking challenge.</p><h3 id="setup">Setup</h3><p>The system utilizes a vertically mounted camera to capture and analyze game data such as <strong>goals</strong>, <strong>ball possession</strong>, <strong>shot accuracy</strong>, and <strong>ball speed</strong>. It provides instant insights by tracking the ball and players, <strong>displaying various statistics on a dashboard for participants and spectators</strong>. 
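One of those statistics, ball speed, falls out of consecutive detections almost for free. A rough sketch (the metres-per-pixel scale is an illustrative calibration value; in practice it comes from the known table width seen by the overhead camera):

```python
import math

def ball_speed(p1, p2, fps=15, metres_per_pixel=0.0012):
    """Estimate ball speed (m/s) from two consecutive detections.

    p1, p2 are (x, y) pixel centres of the ball in successive frames;
    both fps and metres_per_pixel here are illustrative values.
    """
    pixels = math.dist(p1, p2)           # pixel displacement between frames
    return pixels * metres_per_pixel * fps
```

The same per-frame displacements, accumulated per side of the table, also give you possession and shot statistics.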
Meanwhile, an LLM generates commentary, a <strong>text-to-speech model verbalizes the comments</strong>, and all goals are captured to be shown after the game is over - the <strong>highlights</strong>!</p><h3 id="tech-stack">Tech stack</h3><p>Technology-wise, we developed a computer vision algorithm to detect the ball and the players based on the colours, and… that’s the easy part! Then, some more complex heuristics come into play to compute all the statistics you can see in the dashboard above. For the commentary, we used Gemini 2.5 Flash - not the smartest model, but quite fast at generating the commentary in text format. To give voice to these comments, we trained a text-to-speech model from ElevenLabs, so the commentator’s voice carries the emotion you’d expect from someone narrating high-tension foosball games!</p><figure class="kg-card kg-embed-card"><iframe width="356" height="200" src="https://www.youtube.com/embed/n6KRlStnkvI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="ChutAI - DareData"></iframe></figure><h3 id="challenges-the-final-boss-latency-">Challenges - the final boss: <em>Latency</em>!</h3><p>Imagine you’re spinning your players, the ball’s flying, and the crowd - <em>okay, your friends</em> - are on the edge of their seats - except our AI referee is still catching up, huffing and puffing like it just sprinted across the pitch.</p><p>Here’s the play-by-play: Everything runs in the cloud, which is not so great for instant reactions. First, the camera grabs the action and sends the footage up to Google Cloud. 
But before it gets there, the video has to squeeze through a few tight corners - converted to RTSP, streamed to a video broker (shout out to mediamtx!), and only then does our AI get to work its magic inside a Kubernetes Cluster.</p><p>Now, our computer vision is fast, but it’s not lightning fast - so we cap the action at 15 frames per second to keep things smooth. And just when you think the AI commentator is ready to unleash a banger commentary, it pauses for a quick chat with Gemini and ElevenLabs, but it is worth the wait!</p><p>Bottom line: We’ve thrown every trick in the playbook at it to keep the action as real-time as possible. Is it perfect? Not quite. Is it fun? <strong>Absolutely</strong>.</p><hr><h2 id="poker-face-ai-and-the-art-of-the-bluff"><strong>Poker face: AI and the art of the bluff</strong></h2><h3 id="why-poker">Why Poker?</h3><p>As someone fascinated by human behavior, the poker use case is a very unique one to study. Professional players are masters of misdirection: their full-time job is to hide their tells, mask their intentions, and keep opponents guessing. <strong>Trying to spot patterns in a room full of people whose profession is hiding patterns is no easy task.</strong></p><p><em>Confession time: I’m one of those people who can watch hours of poker tournaments - seriously, since I was a kid! My mom used to freak out about it… but all of this without rarely playing a hand myself! I guess you could say I’m a professional spectator.</em></p><p>So when the opportunity came to blend my love for computer vision with my fascination with the poker table, I was all in - no hesitation. Honestly, I was even more hyped for this project than I ever was for the foosball one - <em>but let’s keep that little secret between us, okay?</em></p><h3 id="setup-1">Setup</h3><p>What if I told you that… we don’t have a setup? 
That’s right - <strong>No fancy cameras, no custom lighting, not even a say in where the cameras point.</strong> Our laboratory was the wild west of poker tournament footage available online, where we had exactly zero control over production, camera angles, lighting, or zoom. Nothing. Nada. Rien. Niente.</p><p>But hey, who doesn’t love a good challenge where the environment is totally (not) in your hands? <em>Nervous sweating</em>. If you can build something that works here, you know it’s ready for just about anything.</p><h3 id="tech-stack-1">Tech stack</h3><p>We didn’t just dive in blind - this project is built on quite a lot of psychological research to figure out what gives away a player’s hand. After a deep dive into the science of deception, we decided to start by analyzing two key signals: gaze and decision time.</p><p><em>Where is the player looking? How often are they blinking? Are they fixating on the chips, the cards, or maybe sneaking glances at their opponents? And what about the speed - are they slamming down decisions, or taking their sweet time to think?</em></p><p>We played around with our sidekick: MediaPipe. Using its face detection capabilities and face mesh magic, we could map out every twitch and blink in real-time with our in-house heuristics. Adding to it a little projection wizardry, we even estimated the pose of the eyes - so we could track exactly where the player was looking, frame by frame.</p><h3 id="unmasking-the-bluff">Unmasking the bluff</h3><figure class="kg-card kg-embed-card"><iframe width="356" height="200" src="https://www.youtube.com/embed/z4-cq9v7oFA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Poker -  DareData"></iframe></figure><p>As you can see, we can successfully track a ton of features that map beautifully into a dataset. 
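The blink side of that feature set can be sketched with the classic eye-aspect-ratio (EAR) heuristic. The six landmark coordinates below are illustrative stand-ins for what a face-mesh model such as MediaPipe would supply per frame; the 0.2 threshold is a common convention, not a tuned value:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks ordered [outer corner, upper-1,
    upper-2, inner corner, lower-2, lower-1]. The ratio drops sharply
    when the eye closes, so thresholding it per frame detects blinks.
    """
    v1 = math.dist(eye[1], eye[5])   # vertical openings
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count closed-then-open transitions in a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            closed = True
        elif ear >= threshold and closed:
            blinks += 1
            closed = False
    return blinks
```

Per-player blink rates computed this way, alongside gaze direction and decision time, are the kind of raw features that feed the profiles below.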
With this data, we can build detailed player profiles under specific conditions - bluffing, value betting, holding a monster hand, you name it. By analyzing these trends, we unlock insights into player behavior, train a model to classify whether the player is bluffing… and really help anyone looking to up their poker game (<em>or just geek out, like me</em>).</p><h3 id="challenges">Challenges</h3><p>I already complained about having zero control over the environment - and trust me, that’s 100% the biggest challenge here. Sometimes the camera frames a whole crowd when we just want one player. Sometimes the camera operator gets creative, and our target vanishes from view. And then there are those moments when we’re blessed with the weirdest angles, with no access to any camera settings whatsoever. It’s chaos, pure and simple.</p><p>So, for now, we had to manually pick the moments and hands we wanted to analyze. Not ideal, but hey, you play the cards you’re dealt!</p><p>Besides that, we also have to address some ethical considerations. Running this live on a poker tournament? That’s a hard no - definitely against all possible rules and fair play standards. Even running this offline raises some important privacy questions. We’re committed to respecting player privacy and ensuring our research stays on the right side of ethics.</p><p>The good news? In the next blog post on this topic, I’ll have some exciting updates on how we’re automating and optimizing this whole process. Stay tuned… things are about to get a lot smoother… and with a lot of new features! <em>Don’t tell anyone, but I’ll spoil a few for you: overall posture, hand pose, and macro-expressions.</em></p><h2 id="besides-a-new-obsession-on-blinking-eyes-what-else-did-i-learn"><em>Besides a new obsession with blinking eyes</em>, what else did I learn?</h2><p>Both projects pushed me to rethink what’s possible with computer vision. 
With foosball, controlling the environment was helpful, but the speed of the game and the necessity of running the models in real-time were a true test of efficiency.</p><p>Poker was a different beast - no control over the setup, but a huge focus on subtle details like eye movements and decision timing. It taught me to adapt quickly and get creative with whatever footage I had.</p><p>In the end, blending tech with human behavior is just as challenging as it is fun. And yes, now I can’t help but notice every single blink!</p><h2 id="what-s-next-level-up-"><strong>What’s next? Level up!</strong></h2><p>Are we about to dive headfirst into another sport or game? Maybe! I’m always open to wild suggestions - so if you’ve got a favorite, let me know.</p><p>Compared with foosball, the possibilities in poker are endless. There’s so much more to explore: automating the entire pipeline, detecting even more features, or maybe even keeping a closer eye on the cards themselves to advise a player on what they should do. Does this mean we are on the verge of cracking the poker game? <em>Who knows - but it’s going to be a fun ride finding out!</em></p><h3 id="stay-tuned-because-the-next-level-is-just-getting-started-">Stay tuned - because the next level is just getting started.</h3>]]></content:encoded></item><item><title><![CDATA[What’s LLMOps? 
A quick primer]]></title><description><![CDATA[LLMOps is about managing the entire lifecycle of applications powered by LLMs: from prompt engineering to fine-tuning, evaluation, deployment, monitoring, and continuous improvement.]]></description><link>https://blog.daredata.engineering/whats-llmops-a-quick-primer/</link><guid isPermaLink="false">686e785ca91fb90411ec9ec7</guid><category><![CDATA[Generative AI]]></category><category><![CDATA[Gen-OS]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Wed, 09 Jul 2025 14:13:19 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/07/ecc5b8db-aed2-4541-a51d-3751427d21b3_4272x3167.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="llmops-will-definitely-be-a-game-changer-for-the-future-">LLMOps will definitely be a game changer for the future.</h3><img src="https://blog.daredata.engineering/content/images/2025/07/ecc5b8db-aed2-4541-a51d-3751427d21b3_4272x3167.jpg" alt="What’s LLMOps? A quick primer"><p>Have you heard of MLOps?</p><p>MLOps (Machine Learning Operations) is a set of practices and tools that combines machine learning, software engineering, and DevOps principles to scale and automate the end-to-end lifecycle of machine learning models: from development and training to deployment, monitoring, and maintenance in production.</p><p><strong>Its goal is to enable faster experimentation, reproducibility, scalability, and reliable delivery of ML models, while ensuring collaboration between data scientists, engineers, and operations teams.</strong></p><p>But as large language models (LLMs) move from research labs to real-world products, we’re entering a new age of <strong>LLMOps</strong>.</p><p>LLMOps is about managing the entire lifecycle of applications powered by LLMs: from prompt engineering to fine-tuning, evaluation, deployment, monitoring, and continuous improvement.</p><p>But how different is MLOps from LLMOps? 
The summary below lifts the veil a bit:</p><figure class="kg-card kg-image-card"><img src="https://substackcdn.com/image/fetch/$s_!Z3yZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fecc5b8db-aed2-4541-a51d-3751427d21b3_4272x3167.png" class="kg-image" alt="What’s LLMOps? A quick primer"></figure><p>LLMOps is not just a technical discipline, either: it’s product thinking meets model management. LLMs are not like traditional models - they’re less predictable and highly context-dependent - so their behavior can drift significantly with small changes in data or prompts. Ensuring they stay reliable, accurate, and safe over time requires new approaches.</p><p>Some key elements of LLMOps:</p><ul><li><strong>Prompt management:</strong> Tracking and versioning prompts like you would with code.</li><li><strong>Evaluation frameworks:</strong> Going beyond accuracy: measuring helpfulness, toxicity, factuality, or using evals.</li><li><strong>Monitoring &amp; feedback loops:</strong> Capturing how users interact with the model, spotting issues early.</li><li><strong>Fine-tuning or retrieval updates:</strong> Improving model performance on the fly without retraining from scratch.</li><li><strong>Cost &amp; latency control:</strong> On one hand, Transformer-based models aren’t cheap; on the other, users won’t wait 10 seconds for a response.</li></ul><p>LLMOps is still a developing field, but it’s becoming a key differentiator for teams building with LLMs in production. 
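The first of those elements, prompt management, can start very small. As a hypothetical sketch (this registry is illustrative, not any specific tool's API): hash each template so your logs can record exactly which prompt version produced a given output, and regressions can be bisected like code changes:

```python
import hashlib

class PromptRegistry:
    """Minimal prompt versioning: each registered template gets a
    content hash, and re-registering an unchanged template is a no-op,
    so the history only grows when the prompt actually changes.
    """
    def __init__(self):
        self._prompts = {}  # name -> list of (version_hash, template)

    def register(self, name, template):
        version = hashlib.sha256(template.encode()).hexdigest()[:12]
        history = self._prompts.setdefault(name, [])
        if not history or history[-1][0] != version:
            history.append((version, template))
        return version

    def latest(self, name):
        return self._prompts[name][-1]   # (version_hash, template)

    def history(self, name):
        return [v for v, _ in self._prompts[name]]
```

In production you would persist this to a database and stamp the version hash onto every logged model interaction.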
It’s where infrastructure meets UX, where AI meets operations.</p><p>If you’re shipping anything built on top of LLMs, you should definitely check out the LLMOps concept.</p><p>In this blog post, I’ll walk through the basics.</p><hr><h2 id="why-llms-are-different">Why LLMs Are Different</h2><p>Unlike traditional ML models, LLMs are:</p><ul><li><strong>Non-deterministic</strong>: The same input can produce slightly different outputs.</li><li><strong>Highly context-dependent</strong>: Prompt wording, model temperature, and history affect the result.</li><li><strong>Pretrained at scale</strong>: You typically don’t train from scratch (unless you have A LOT of money); instead, you prompt, fine-tune, or adapt.</li><li><strong>Expensive</strong>: Each call to a large model costs money and time.</li><li><strong>The hardest: they are difficult to evaluate</strong>. There’s no single metric that captures "quality" or "truthfulness."</li></ul><p>Because of this, product teams working with LLMs need to rethink how they manage their systems. And this is where LLMOps comes in.</p><hr><h2 id="what-llmops-involves">What LLMOps Involves</h2><p>Here are the core components of a good LLMOps workflow:</p><h3 id="1-prompt-engineering-versioning">1. <strong>Prompt Engineering &amp; Versioning</strong></h3><p>Prompts are the new model weights. Teams need tools to test, iterate, and version them, especially as changes can drastically impact output.</p><h3 id="2-evaluation-beyond-accuracy">2. <strong>Evaluation Beyond Accuracy</strong></h3><p>LLM outputs aren’t binary (or even multiclass). So long, cat-vs-dog predictions.</p><p>You need to evaluate:</p><ul><li>Relevance</li><li>Helpfulness</li><li>Toxicity</li><li>Factual accuracy</li><li>Style/tone</li></ul><p>Due to the proliferation of B2C AI apps, users are picky and demanding about the output of AI tools. LLMOps should include human and automated feedback systems to track these.</p><h3 id="3-monitoring-and-observability">3. 
<strong>Monitoring and Observability</strong></h3><p>How is the model performing <em>in the real world</em>? Are users satisfied? Are there hallucinations? Are costs exploding?</p><p>LLMOps introduces structured logging, user feedback loops, and red-teaming tools to surface issues before they become problems.</p><h3 id="4-retrieval-augmented-generation-rag-">4. <strong>Retrieval-Augmented Generation (RAG)</strong></h3><p>Most LLM applications rely on bringing in external knowledge through retrieval systems (e.g. vector databases). LLMOps includes maintaining the data pipeline behind these retrieval systems, especially document accuracy, relevance, and avoiding contradictory statements - the latter is one of the most difficult things to guarantee in GenAI systems.</p><h3 id="5-fine-tuning-and-model-selection">5. <strong>Fine-tuning and Model Selection</strong></h3><p>While rare in most apps, some teams go beyond prompting and use fine-tuning or adapter-based methods.</p><p>Choosing the right model (open-source vs. API), managing versions, and knowing when to fine-tune are all part of LLMOps. Whether you need managed services or your own cloud deployment should also be weighed in the model selection process.</p><h3 id="6-cost-and-latency-management">6. <strong>Cost and Latency Management</strong></h3><p>One of the major mistakes in LLM applications is trying to optimize for performance while simultaneously minimizing cost and latency. The truth is that the two are normally at odds. Take Deep Research from OpenAI - it’s very accurate and provides great outputs, but it’s expensive and takes far longer than a normal model interaction.</p><p>The same is true for every LLM-based application. 
If you want more performance, you should expect higher cost and inference time.</p><p>LLMs are also more expensive and slower compared to traditional APIs.</p><p>LLMOps can help, of course: setting caching strategies, fallback models, batch inference, and optimizing for both performance and cost.</p><hr><h2 id="who-needs-llmops">Who Needs LLMOps?</h2><p>If you’re:</p><ul><li>Building a chatbot or virtual assistant,</li><li>Integrating LLMs into your SaaS,</li><li>Using RAG to bring company data into answers,</li><li>Deploying summarization or classification pipelines,</li></ul><p>…you should investigate LLMOps and how to include it natively in your solution.</p><p>It’s not just a concern for ML engineers, as it also involves:</p><ul><li>Product managers,</li><li>Prompt engineers,</li><li>Backend engineers,</li><li>UX teams.</li></ul><p>Building with LLMs is still a <strong>system problem</strong>.</p><hr><h2 id="where-the-field-is-going">Where the Field Is Going</h2><p>LLMOps is still young, but we’re seeing an ecosystem form around it. New tools are emerging for:</p><ul><li>Prompt/version management (<em>PromptLayer</em>, <em>LlamaIndex</em>, <em>Helicone</em>)</li><li>LLM observability (<em>Langfuse</em>, <em>Arize</em>, <em>WhyLabs</em>)</li><li>Evaluation and feedback (<em>Truera</em>, <em>Humanloop</em>, <em>DareData GenOS</em>)</li><li>Experiment tracking and optimization (<em>Weights &amp; Biases</em>, <em>Ragas</em>)</li></ul><p>In the future, we’ll likely see more standardized frameworks for shipping LLM-powered products safely, reliably, and efficiently, just like MLOps did for traditional ML.</p><hr><h2 id="final-thoughts">Final Thoughts</h2><p>LLMs are powerful, but unpredictable. Putting them in production is as much about <strong>infrastructure, observability, and iteration</strong> as it is about clever prompts or model selection.</p><p>If you’re building with LLMs, it’s worth thinking about your LLMOps stack early. 
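</p><p>As a rough sketch of two of the cost controls mentioned above, caching and fallback models, here is a minimal, hypothetical example (<code>call_model</code> is an illustrative stand-in for a real LLM client, not an actual API):</p>

```python
# Minimal sketch of two LLMOps cost controls: response caching and a
# fallback model. `call_model` is a hypothetical stand-in for a real
# LLM client; the TimeoutError simulates an outage of the big model.
cache = {}

def call_model(model, prompt):
    if model == "big-model":
        raise TimeoutError("model overloaded")  # simulated outage
    return f"[{model}] answer to: {prompt}"

def answer(prompt):
    if prompt in cache:  # 1. serve repeated questions for free
        return cache[prompt]
    try:
        result = call_model("big-model", prompt)
    except TimeoutError:
        # 2. degrade gracefully to a cheaper, faster model
        result = call_model("small-model", prompt)
    cache[prompt] = result
    return result

first = answer("What is LLMOps?")   # falls back to the small model
second = answer("What is LLMOps?")  # served from cache
```

<p>In production you would key the cache on a hash of model, prompt, and sampling parameters, and add latency budgets before falling back.</p><p>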
It may be the difference between a clever demo and a reliable, future-proof product.</p><p>At DareData, we have been natively incorporating LLMOps into our tool, Gen-OS. If you’re interested in knowing more, let me know at ivo@daredata.ai</p>]]></content:encoded></item><item><title><![CDATA[DareData Use Case: Beyond Simple Chatbots]]></title><description><![CDATA[Hear how Gen-OS is able to support your Customer Service with AI]]></description><link>https://blog.daredata.engineering/daredata-use-case-chat-woo/</link><guid isPermaLink="false">684dde43a91fb90411ec9e57</guid><category><![CDATA[Use Case]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Wed, 02 Jul 2025 12:45:38 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/07/image--15-.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2025/07/image--15-.png" alt="DareData Use Case: Beyond Simple Chatbots"><p><strong>How a Multi-Agent GenAI Architecture can Reshape Customer Support</strong></p><p>Customer service teams deal with thousands of repetitive and routine questions every day: take a company in the telco sector, which can receive queries ranging from mobile plan details to subscription problems, FAQs, or service issues. 
Or an online retail company that may receive millions of requests regarding product returns.</p><p>With all this complexity, is it possible to automate most of the interactions intelligently, without sacrificing quality, and even provide <em>better</em> answers than traditional support flows?</p><p><strong>Yes, it is!</strong> In this blog post, we’ll explore how multi-agent GenAI systems can help you automate a large portion of your customer support queries.</p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/06/image.png" class="kg-image" alt="DareData Use Case: Beyond Simple Chatbots"></figure><hr><h3 id="the-challenge">The Challenge</h3><p></p><p>Companies across industries face a flood of customer inquiries every single day.</p><p>In <strong>telecom</strong>, it’s:<br><em>“What’s my data balance?”</em><br><em>“How do I change my subscription?”</em><br><em>“Where’s my latest bill?”</em></p><p>In <strong>banking</strong>, it’s:<br><em>“Can you send me my last 3 transactions?”</em><br><em>“How do I activate my credit card?”</em><br><em>“What’s the status of my loan application?”</em></p><p>In <strong>retail</strong>, it’s:<br><em>“Where is my order?”</em><br><em>“Can I return this item?”</em><br><em>“Do you have this in stock?”</em></p><p>In <strong>utilities</strong>, it’s:<br><em>“Why is my bill higher this month?”</em><br><em>“How do I submit a meter reading?”</em><br><em>“When is my next payment due?”</em></p><p>While many of these questions are predictable and repetitive, most support systems (whether contact centers, legacy chatbots, or rule-based flows) struggle to scale, personalize responses, or adapt to changing customer needs. 
<strong>The traditional support model struggles with scale, consistency, and personalization.</strong></p><p>That's where GenAI comes in.</p><hr><h3 id="what-it-is-a-multi-agent-ai-system-built-for-scale">What It Is: A Multi-Agent AI System Built for Scale</h3><p>Let's take an example of how a GenAI chatbot can help you. To build a production-level chatbot, you mostly need three things: </p><ul><li>Integration with current IT systems of the company</li><li>A way to flag cases where human-in-the-loop (HIIL) is needed</li><li>Improvement of performance with past interaction</li></ul><p>Our AI architecture is built with <strong>Gen-OS</strong>, combining Retrieval-Augmented Generation (RAG) with API-based agents to handle both generic and highly personalized queries. Here’s how it works:</p><h4 id="-rag-agent-">✅ RAG Agent:</h4><p>Answers frequently asked questions using a curated FAQ database, powered by real-time business data and managed by business users.</p><h4 id="-api-agents-">✅ API Agents:</h4><p>Plug directly into internal systems (mobile, subscription, and issue data) so users get precise, real-time answers tailored to their account.</p><h4 id="-agent-orchestrator-">✅ Agent Orchestrator:</h4><p>Acts like a conductor in an orchestra, routing the query to the right agent and stitching the answers together seamlessly.</p><p>This multi-agent setup means AI agents don't guess and work side-by-side with humans to achieve enterprise automation.</p><hr><h3 id="the-architecture-in-action">The Architecture in Action</h3><p></p><p>When a user sends a question (via web, app, or other channels), here’s what happens:</p><ul><li><strong> The Agent Orchestrator</strong> detects intent and routes the query.</li><li><strong>Agent FAQs</strong> responds to general business inquiries using RAG.</li><li><strong>Agent Mobile</strong>, <strong>Agent Subs</strong>, and <strong>Agent Issues</strong> tap into APIs to retrieve customer-specific data.</li></ul><figure class="kg-card 
kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/07/image.png" class="kg-image" alt="DareData Use Case: Beyond Simple Chatbots"></figure><p>The system replies instantly, often without human intervention. But when one of the agents fails, the HIIL process kicks in.</p><hr><h3 id="business-impact">Business Impact</h3><p></p><p>The million-dollar question: is the tech generating business value? Let’s see:</p><ul><li> <strong>+50,000 conversations managed per month</strong></li><li><strong>73% of queries resolved without human intervention</strong></li><li><strong>Over €400,000 in value generated since launch (August 2024)</strong></li></ul><p>Our customers are radically transforming their business support processes, unlocking tangible business value through increased efficiency, simplified operations, and enhanced decision-making capabilities.</p><hr><h3 id="the-future-of-customer-service">The Future of Customer Service</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.daredata.engineering/content/images/2025/07/image-1.png" class="kg-image" alt="DareData Use Case: Beyond Simple Chatbots"><figcaption>Customer Service Interaction with Gen-OS</figcaption></figure><p>You really want to avoid "just another chatbot". You want an AI system that is modular, scales with the business, and evolves with your customers. </p><p>You want to blend RAG and API agents into one orchestrated AI experience. 
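</p><p>To illustrate the orchestration pattern described above, here is a minimal, hypothetical sketch of intent routing; a production orchestrator would typically use an LLM-based classifier rather than keyword rules, and the agent functions here are illustrative stubs:</p>

```python
# Hypothetical sketch of the Agent Orchestrator pattern: detect intent
# with simple keyword rules (a real system would use an LLM classifier)
# and route the query to the matching agent, defaulting to the FAQ agent.
def faq_agent(query):
    return "RAG answer from the FAQ knowledge base"

def mobile_agent(query):
    return "Live data from the mobile API"

ROUTES = {"plan": mobile_agent, "data balance": mobile_agent}

def orchestrate(query):
    for keyword, agent in ROUTES.items():
        if keyword in query.lower():
            return agent(query)
    return faq_agent(query)  # default: generic FAQ agent

reply = orchestrate("What is my data balance?")
```

<p>The key design choice is that routing and answering stay decoupled, so new agents can be plugged in without touching the others.</p><p>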
With Gen-AI, companies can finally move beyond scripted responses and into real conversation.</p><p>If your customer support still relies on outdated rules or disconnected tools, now’s the time to rethink.</p><p><strong>Want to see Gen-OS in action or explore how a similar solution could work for your business?</strong></p><p><br>Let’s talk — <a rel="noopener">ivo@daredata.ai</a></p>]]></content:encoded></item><item><title><![CDATA[DareData Use Case: E-mail Replier]]></title><description><![CDATA[Legal teams are often bogged down with routine, repetitive requests that require high accuracy and careful compliance with internal systems and legal protocols.
But what if you could automate 90% of that work, without compromising on quality, and even improve over time?]]></description><link>https://blog.daredata.engineering/how-generative-ai-is-transforming-legal-operations-at-scale/</link><guid isPermaLink="false">6837769ca91fb90411ec9db6</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Generative AI]]></category><category><![CDATA[Legal AI Solution]]></category><category><![CDATA[AI Solution]]></category><category><![CDATA[AI Use Case]]></category><category><![CDATA[Legal]]></category><category><![CDATA[Gen-OS]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Wed, 28 May 2025 21:29:39 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/05/Screenshot-2025-05-29-153152.png" medium="image"/><content:encoded><![CDATA[<h3 id="how-generative-ai-is-transforming-legal-operations-at-scale">How Generative AI is Transforming Legal Operations at Scale</h3><img src="https://blog.daredata.engineering/content/images/2025/05/Screenshot-2025-05-29-153152.png" alt="DareData Use Case: E-mail Replier"><p></p><p>Legal teams are often bogged down with routine, repetitive requests that require high accuracy and careful compliance with internal systems and legal protocols.</p><p>But what if you could automate 90% of that work, without compromising on quality, and even improve over time? The result: freeing your lawyers and admin team for high-value tasks.</p><p>In this blog post, I'll walk you through our Legal Processes Automation tool (powered by Gen-OS). 
<strong>This project is one of the implementations of our e-mail replier tool, that can automate any e-mail request in a matter of seconds.</strong></p><p>Welcome to the future of legal process automation powered by generative AI.</p><hr><h2 id="the-challenge">The Challenge</h2><p></p><p>Legal departments, especially in large organizations, receive hundreds (sometimes thousands) of incoming requests: contracts, compliance checks, data access inquiries, and more. Handling these manually is slow, error-prone, and costly. Additionally, these requests are typically standardized and well-suited for automation—especially in the most common cases.</p><p>Enter <strong>LegalAI + Gen-OS</strong>, a new solution already in production at different companies, that completely reimagines legal request handling with AI at its core.</p><hr><h2 id="what-it-is-end-to-end-ai-for-legal-requests">What it is: End-to-End AI for Legal Requests</h2><p>The process starts with a simple trigger: a <strong>legal request</strong> submitted by a customer, employee, or partner via e-mail or other medium.</p><h3 id="step-1-ai-reads-and-understands-the-request">Step 1: AI Reads and Understands the Request</h3><p>The AI engine parses the text, extracts key entities, and maps them to the appropriate <strong>legal system</strong> or compliance database using a rich <strong>AI Agent Catalog</strong>.</p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/05/image-2.png" class="kg-image" alt="DareData Use Case: E-mail Replier"></figure><p>Every request opens a new case in Gen-OS. 
In the mockup example above, our customer received a request for personal information about one of their customers.</p><hr><h3 id="step-2-create-workflow-based-on-the-request">Step 2: Create Workflow Based on the Request </h3><p>Once the data is verified, a workflow with all the needed steps is generated:</p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/05/image-3.png" class="kg-image" alt="DareData Use Case: E-mail Replier"></figure><p>In this case, the generated workflow contains the following steps:</p><ul><li>Read the document and email</li><li>Check the internal database for the requested information</li><li>Generate a PDF with the response to the requester</li></ul><p>Gen-OS generates a <strong>PDF document</strong>, formatted and ready to respond to the legal inquiry.</p><hr><h3 id="step-3-human-in-the-loop-when-needed">Step 3: “Human-in-the-Loop” When Needed</h3><p>While the automation is highly accurate, a <strong>human review mechanism</strong> kicks in if the system detects possible hallucinations or ambiguities. This is fully supported by Gen-OS.</p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/05/image-5.png" class="kg-image" alt="DareData Use Case: E-mail Replier"></figure><hr><h2 id="business-impact">Business impact</h2><p>What's the business impact our clients are seeing with our Legal Processes + GenAI tool?</p><p>✅ 90% accuracy rate in the AI answers. The 10% inaccurate answers are handled by the Human-in-the-Loop mechanism.</p><p>⌛ 2400h saved in 2 months.</p><p>⏱️ Dramatically faster answer times (10x).</p><hr><h2 id="final-thoughts-the-future-of-legal-ops">Final Thoughts: The Future of Legal Ops</h2><p>This case is a perfect example of how AI can help users free themselves from tedious work, without removing them from the loop. 
By automating the repetitive and error-prone parts of the legal process, teams can focus on strategy, negotiation, and judgment.</p><p>If your legal team is still stuck in the manual era, it’s time to explore what <a href="https://www.daredata.ai/">DareData's</a> GenAI solution can do for you.</p><p>Feel free to reach out at ivo@daredata.ai if you would like to know more!</p>]]></content:encoded></item><item><title><![CDATA[GenAI Applications - AI Sales Assistant]]></title><description><![CDATA[AI sales assistants can take your sales to the next level and give you an immediate sales growth of 12%.]]></description><link>https://blog.daredata.engineering/ai-sales-assistant/</link><guid isPermaLink="false">67b48ca0a91fb90411ec9be9</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Use Case]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Thu, 22 May 2025 14:22:30 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/05/image--13-.png" medium="image"/><content:encoded><![CDATA[<h2 id="sales-results-will-never-be-the-same-after-genai">Sales results will never be the same after GenAI</h2><img src="https://blog.daredata.engineering/content/images/2025/05/image--13-.png" alt="GenAI Applications - AI Sales Assistant"><p>In today's fast-paced sales environment, efficiency is the name of the game.  </p><p>Sales consultants are constantly juggling multiple clients, struggling to condense crucial information, and spending valuable time crafting personalized email offers. 
</p><p>Worse: the majority of sales developers spend a<a href="https://resources.insidesales.com/blog/why-sales-reps-spend-less-time-selling/?utm_source=chatgpt.com"> staggeringly low amount of time selling.</a> Sales developers spend the majority of their time on administrative and other secondary tasks, which limits their ability to research clients and identify high-quality targets.</p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/04/image-1.png" class="kg-image" alt="GenAI Applications - AI Sales Assistant"></figure><p>This issue is even more significant because each hour spent on selling generates far more value than time spent on non-sales tasks. Maximizing the time sales developers spend on core selling activities is one of the top priorities for most sales teams. Sales developers often face:</p><ul><li><strong>Missed Opportunities:</strong>  Time constraints can prevent SDRs from reaching out to all potential clients, resulting in lost revenue.</li><li><strong>Inconsistent Messaging:</strong> Manually creating offers can lead to inconsistencies and potentially dilute brand messaging.</li><li><strong>Reduced Efficiency:</strong>  Valuable time is spent on administrative tasks rather than building relationships and closing deals.</li></ul><p>Generative AI is one of the first technologies to significantly boost the productivity of sales teams—primarily through human augmentation. Instead of replacing human effort, AI agents enhance it by taking over low-value, repetitive tasks. From drafting emails and logging CRM updates to researching leads and scheduling meetings, generative AI automates time-consuming processes, allowing sales professionals to focus on high-impact activities like building relationships and closing deals. </p><p>In this post, we’ll walk you through a real project and demo that shows exactly what today’s technology can do. 
No fluff, just practical results.</p><hr><p><strong>The B2B sales assistant was a project we've developed for our customer, <a href="https://www.nos.pt/">NOS</a>. </strong>It's a solution designed to make the life of salespeople easier, within the context of the telecommunications industry. </p><p>What are the main features of the product?</p><ul><li><strong>Chat Bot: </strong>Chatbot functionalities able to interact with users.</li><li><strong>Email creator:</strong> Email creator and editor able to generate drafts</li><li><strong>Opportunities:</strong> Opportunity and new business identifier<br></li></ul><p>The AI Sales Assistant tackles these challenges head-on by providing sales consultants with an intelligent chatbot assistant.  Here's how it works:</p><ol><li><strong>Natural Language Interface:</strong>  Sales consultants can interact with the chatbot using natural language, asking questions about their clients or requesting specific information.</li><li><strong>SQL Query Generation:</strong>  The magic happens behind the scenes as the NLP engine transforms these natural language queries into efficient SQL queries.  
This allows the system to retrieve precise data from the client database and ERP/CRM systems.</li><li><strong>Concise Summaries:</strong> The chatbot doesn't just return raw data; it generates concise and insightful summaries of client information, providing consultants with a quick and comprehensive overview.</li><li><strong>Automated Offer Generation:</strong>  Based on the client data and consultant input, the chatbot can even generate personalized email offers, saving valuable time and ensuring consistent messaging.</li></ol><hr><h3 id="the-results">The Results</h3><p></p><p>By automating key tasks and providing quick access to critical information, the Sales Assistant achieved remarkable results:</p><ul><li><strong>Sales Meetings:</strong>  Sales consultants in the target group using the tool booked 28% more meetings than the control group.</li><li><strong>Proposals introduced in the CRM:</strong>  Sales consultants added 13% more proposals to the CRM than the control group.</li><li><strong>+500k €:</strong>  The most important metric: sales consultants using the tool sold 500k euros more (+12%) than the control group.</li></ul><p>These results showed that human + AI is a great combination for sales departments. The AI sales assistant didn’t just support the sales team, it elevated their performance, unlocking measurable gains across the entire sales funnel.</p><hr><h3 id="the-future-of-b2b-sales-is-here">The Future of B2B Sales is Here</h3><p></p><p>The GenAI Sales Assistant represents a significant leap forward in B2B sales technology.  </p><p>Empowering sales consultants with AI-powered assistance takes them to the next level. 
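</p><p>As an illustration of the natural-language-to-SQL flow described earlier, here is a minimal, hypothetical, self-contained sketch; the canned query inside <code>nl_to_sql</code> stands in for the real NLP engine, and the table and data are invented for the example:</p>

```python
import sqlite3

# Hypothetical sketch of the "SQL query generation" step: in the real
# assistant an NLP engine translates the question into SQL; here a
# canned query stands in so the retrieve-and-summarize flow is runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (name TEXT, plan TEXT, monthly_fee REAL)")
conn.execute("INSERT INTO clients VALUES ('Acme', 'Fiber Pro', 49.9)")

def nl_to_sql(question):
    # stand-in for the LLM translation step
    return "SELECT name, plan, monthly_fee FROM clients WHERE name = 'Acme'"

def summarize(row):
    # turn the raw row into the concise summary the consultant sees
    name, plan, fee = row
    return f"{name} is on the {plan} plan at {fee} EUR/month."

row = conn.execute(nl_to_sql("What plan is Acme on?")).fetchone()
summary = summarize(row)
```

<p>The point of the pattern is that the consultant never sees SQL or raw rows, only the generated summary.</p><p>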
If you want to build something similar for your sales teams, make sure you contact us at sales@daredata.ai or contact@daredata.ai</p><p>And if you still don't believe in the technology, take a look at the demo <a href="https://drive.google.com/file/d/1ahGHhDOkfxhLNglv6YcOcSCceR5TRJ-K/view?resourcekey">here</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.daredata.engineering/content/images/2025/05/image.png" class="kg-image" alt="GenAI Applications - AI Sales Assistant"><figcaption>Check our demo of the tool <a href="https://drive.google.com/file/d/1ahGHhDOkfxhLNglv6YcOcSCceR5TRJ-K/view?resourcekey">here</a>.</figcaption></figure><p> </p>]]></content:encoded></item><item><title><![CDATA[Keeping Purpose Alive: How Communication Systems Shape Company Culture]]></title><description><![CDATA[<h3 id="one-of-the-fundamental-challenges-in-any-growing-company-is-maintaining-effective-communication-and-ensuring-that-employees-remain-engaged-with-the-organization-s-mission-">One of the fundamental challenges in any growing company is maintaining effective communication and ensuring that employees remain engaged with the organization's mission.</h3><p></p><!--kg-card-begin: markdown--><blockquote>
<p>Human cooperation has always been dependent on shared beliefs and narratives. However, as groups grow beyond a certain size, these shared beliefs become harder to sustain. This</p></blockquote>]]></description><link>https://blog.daredata.engineering/keeping-purpose-alive-how-communication-systems-shape-company-culture/</link><guid isPermaLink="false">67ebacd0a91fb90411ec9cce</guid><dc:creator><![CDATA[Rui Figueiredo]]></dc:creator><pubDate>Tue, 01 Apr 2025 09:16:00 GMT</pubDate><content:encoded><![CDATA[<h3 id="one-of-the-fundamental-challenges-in-any-growing-company-is-maintaining-effective-communication-and-ensuring-that-employees-remain-engaged-with-the-organization-s-mission-">One of the fundamental challenges in any growing company is maintaining effective communication and ensuring that employees remain engaged with the organization's mission.</h3><p></p><!--kg-card-begin: markdown--><blockquote>
<p>Human cooperation has always been dependent on shared beliefs and narratives. However, as groups grow beyond a certain size, these shared beliefs become harder to sustain. This has significant implications for how companies function as they scale.<br>
Yuval Noah Harari, in Sapiens: A Brief History of Humankind</p>
</blockquote>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="thedisconnectionproblem">The Disconnection Problem</h2>
<!--kg-card-end: markdown--><p>This slide has been around since the first DareData State of the Union.</p><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/04/e40c7d04-95d8-4653-9f14-97be63110d9d_2140x1204.png" class="kg-image"></figure><figure class="kg-card kg-image-card"><img src="https://blog.daredata.engineering/content/images/2025/04/32af896d-352a-4053-9014-d1c4640c6700_2056x1170.png" class="kg-image"></figure><p>In my career, I have been close to the reality of large corporations—DareData's main clients are some of the biggest Portuguese companies. At the same time, I have seen DareData itself grow from a small team of 4 people to a company of 70 people (and counting).</p><p>During a meeting at one of these large corporations, I found myself reflecting on how we can ensure that DareData remains a place where people continue to believe in its mission and vision. At what point does company size make that belief difficult to sustain? What is the limit before communication challenges and disengagement take over?</p><!--kg-card-begin: markdown--><blockquote>
<p>In the wake of the Cognitive Revolution, gossip helped Homo sapiens to form larger and more stable bands. But even gossip has its limits. Sociological research has shown that the maximum ‘natural’ size of a group bonded by gossip is about 150 individuals. Most people can neither intimately know, nor gossip effectively about, more than 150 human beings.<br>
Even today, a critical threshold in human organisations falls somewhere around this magic number.<br>
Yuval Noah Harari, in Sapiens</p>
</blockquote>
<!--kg-card-end: markdown--><p>In large organizations, employees no longer have a direct line to the company's founders or executive team. Instead, their perception of the company’s values, vision, and purpose is filtered through multiple layers of management. This can lead to inconsistencies in messaging and a weakening of the shared belief in the company’s mission. Employees often end up seeing their role in the company as transactional rather than mission-driven.</p><!--kg-card-begin: markdown--><h2 id="systemsoverindividuals">Systems Over Individuals</h2>
<!--kg-card-end: markdown--><p>One of the most common mistakes companies make when trying to solve their biggest issues is focusing on individuals rather than the system they operate within. Poor performance is rarely just about the person—it’s about the processes, structures, and workflows they are part of. If a system is flawed, even the most talented employees will struggle, and new hires will quickly adopt the same behaviors as those before them.</p><p>Instead of replacing individuals, companies should focus on refining their systems.</p><!--kg-card-begin: markdown--><blockquote>
<p>Conway’s law. “Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” Conway’s law tells us that an organization’s structure and the actual communication paths between teams persevere in the resulting architecture of the system built</p>
</blockquote>
<!--kg-card-end: markdown--><p>A strong system creates a self-sustaining environment where employees can perform at their best, stay engaged, and feel a genuine connection to the company’s mission. If DareData and other growing companies want to maintain a strong culture, the focus should not just be on hiring great people but on building great systems that empower them to succeed.</p><hr><p>As companies grow, as technology evolves, and as projects shift, organizations must continuously adapt their communication systems. Maintaining a strong sense of purpose and engagement requires more than just good leadership—it demands intentional structures that foster clarity, alignment, and trust. By prioritizing adaptable systems over static hierarchies, companies can navigate change effectively while ensuring that employees remain connected to the mission. In the end, it is not just about scaling a business but about scaling the systems that sustain its culture and vision.</p>]]></content:encoded></item><item><title><![CDATA[Can AI be deployed in Critical Processes? Addressing the Key Challenges]]></title><description><![CDATA[<p>You’ve seen it before: a company starts deploying AI but struggles to <strong>move beyond the most high-level use cases</strong>. Yet, current AI technology has the potential to power near-fully automated business workflows—so what’s missing?</p><p>The challenge lies in how you manage the error. 
In critical business processes,</p>]]></description><link>https://blog.daredata.engineering/the-5-parameters-to-deploy-ai-in-critical-processes/</link><guid isPermaLink="false">67ded51fa91fb90411ec9bf5</guid><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Tue, 25 Mar 2025 16:45:29 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/03/mads-schmidt-rasmussen-xfngap_DToE-unsplash--2--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2025/03/mads-schmidt-rasmussen-xfngap_DToE-unsplash--2--1.jpg" alt="Can AI be deployed in Critical Processes? Addressing the Key Challenges"><p>You’ve seen it before: a company starts deploying AI but struggles to <strong>move beyond the most high-level use cases</strong>. Yet, current AI technology has the potential to power near-fully automated business workflows—so what’s missing?</p><p>The challenge lies in how you manage the error. In critical business processes, AI errors can be extremely costly and deploying these systems becomes far more complex. <strong>AI will always fail at some point</strong>—and businesses need to reconcile this reality with the need for reliability in critical processes.</p><p>For example, we’ve heard this story countless times from our customers:</p><ul><li>Banks are happy to use AI for processing documentation—but automatic loan approvals? Out of the question.</li><li>Pharmaceutical companies embrace AI copilots for employees—but automating FDA or EMA requests? Too risky.</li><li>Telcos rely on AI for high-level customer support—but letting AI adjust pricing or plans autonomously? 
Not a chance.</li></ul><p>Based on this dilemma, we’ve identified five crucial factors that determine the success of AI deployments in critical processes, all of which are the main vectors in our product, Gen-OS.</p><p>To capture the value to be generated by AI, organizations must navigate key challenges and strategically integrate GenAI into their workflows. Below, we break down the essential considerations for maximizing GenAI’s effectiveness.</p><hr><h3 id="1-structured-build-process">1. <strong>Structured Build Process</strong></h3><p></p><p>One of the biggest challenges in deploying GenAI is ensuring organizations follow a structured development process. While drag-and-drop tools can be useful for non-critical tasks, they often fall short when security, integration, and domain-specific expertise are required. This misalignment can lead to poor implementations or the productization of something that can't be productized. Here are our recommendations:</p><ul><li><strong>Adopt a rigorous development framework</strong> that includes domain-specific fine-tuning and customization.</li><li><strong>Ensure model alignment with business needs</strong> instead of relying solely on generic, pre-built solutions.</li><li><strong>Recognize the limitations of off-the-shelf solutions</strong>, which typically struggle to grasp the nuances of your business.</li><li><strong>Evaluate vertical AI tools carefully</strong>—they may address specific challenges, but how many different AI solutions do you want to manage?</li><li><strong>For critical processes, rely on expert-built solutions</strong>—GenAI applications that will be deployed in critical processes should be developed by data scientists with deep technical knowledge to ensure reliability, performance, and constant iteration that doesn't depend on someone else's roadmap. 
In your personal life, you would not want your next password management app to be built by <a href="https://en.wikipedia.org/wiki/Vibe_coding">vibe coding</a>, right?</li></ul><hr><h3 id="2-human-in-the-loop-mechanism">2. <strong>Human-in-the-Loop Mechanism</strong></h3><p></p><p><strong>AI models are not perfect and never will be. Period.</strong> </p><p>They can hallucinate, make biased decisions, or generate inappropriate content. Having human oversight is crucial to ensure outputs are reliable and aligned with business objectives. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.daredata.engineering/content/images/2025/03/image.png" class="kg-image" alt="Can AI be deployed in Critical Processes? Addressing the Key Challenges"><figcaption>Human-in-the-Loop Mechanism in Gen-OS</figcaption></figure><p><strong>From our experience, if you want to deploy AI in critical processes you need to guarantee the following: </strong></p><ul><li>Implement continuous human review processes.</li><li>Create feedback loops for refining AI-generated content.</li></ul><p>53% of companies' boards <a href="https://newsflash.tdsynnex.co.uk/artificial-intelligence/leaders-worried-about-risks-of-ai-hallucinations/6493?utm_source=chatgpt.com">are worried about managing AI systems'</a> outputs. You probably should be, too.</p><hr><h3 id="3-integration-with-enterprise-systems">3. <strong>Integration with Enterprise Systems</strong></h3><p></p><p>GenAI models do not operate in isolation; they need to integrate with existing enterprise software, APIs, and workflows. Proper integration ensures that AI-powered solutions can access relevant data, interact with other systems, and provide meaningful insights in real time. 
</p><p>Without this level of connectivity, AI risks becoming another siloed tool that lacks the ability to improve decision-making, automate processes, or drive efficiency across an organization.</p><p><strong>A critical GenAI solution needs APIs and middleware to connect GenAI with core enterprise applications.</strong></p><p>If you don't enable this, you will probably end up on the wrong side of this statistic: <a href="https://futurumgroup.com/insights/more-than-50-of-workers-admit-to-using-unapproved-generative-ai-tools/?utm_source=chatgpt.com">55% of workers use unapproved AI tools</a>. Your employees will use AI no matter what. Your goal is to provide them with the safest environment possible.</p><hr><h3 id="4-performance-monitoring-and-continuous-optimization">4. <strong>Performance Monitoring and Continuous Optimization</strong></h3><p></p><p>Deploying a GenAI model is just the beginning. To ensure long-term success, organizations must proactively monitor performance, establish continuous feedback loops, and regularly adjust RAG systems to adapt to new business roles and documents.</p><p>Integrating fresh content, refining prompts, and addressing biases will always be essential to maintaining accuracy and relevance, particularly in critical processes.</p><p>Without ongoing oversight, AI models risk drifting from their intended purpose, producing outdated or misleading outputs. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.daredata.engineering/content/images/2025/03/Untitled.png" class="kg-image" alt="Can AI be deployed in Critical Processes? Addressing the Key Challenges"><figcaption>Gen-OS Knowledge Center lets you update content for each AI Agent</figcaption></figure><hr><h3 id="5-scaling-from-proof-of-concept-to-production">5. 
<strong>Scaling from Proof-of-Concept to Production</strong></h3><p>The combination of the previous parameters enables a much smoother jump from POC to production.</p><p>There is a significant gap in the value generated between proof-of-concept (POC) projects and fully deployed, production-level enterprise systems. While POCs demonstrate the potential of a technology and can generate initial excitement, they often fall short when scaled to real-world, enterprise-level applications. This gap arises due to the factors we've discussed before.</p><p><a href="https://www.forbes.com/sites/peterbendorsamuel/2024/01/08/reasons-why-generative-ai-pilots-fail-to-move-into-production/?utm_source=chatgpt.com">Approximately 90% of GenAI POCs</a> never reached production. Can you imagine the value wasted on these solutions?</p><p></p><h3 id="final-thoughts"><strong>Final Thoughts</strong></h3><p>To successfully navigate the complexities of deploying GenAI in critical processes, you need to follow a structured approach, integrate human oversight, ensure enterprise system integration, continuously monitor performance, and scale effectively from proof-of-concept to production.</p><p>At DareData, we understand these challenges and are committed to helping businesses unlock the full potential of AI in their workflows. </p><p>If you're looking for the right partner to take your AI integration to the next level and ensure reliable, scalable AI deployments, get in touch with us at <a rel="noopener">ivo@daredata.ai</a>. </p><p>Learn more about our AI ops platform, Gen-OS, and discover how we can help you build smarter, more efficient systems that drive long-term success for your business.</p>]]></content:encoded></item><item><title><![CDATA[2024: A Year of Structural Transformation]]></title><description><![CDATA[<p>DareData will close 2024 with a 5% revenue growth compared to 2023. 
At first glance, given the rapid growth in our market, one might be tempted to classify this year as underwhelming. However, 2024 has been a transformative year for us.</p><p>We started the year as a 100% consulting business.</p>]]></description><link>https://blog.daredata.engineering/2024-a-year-of-a-structural-transformation/</link><guid isPermaLink="false">6786e139a91fb90411ec9af8</guid><dc:creator><![CDATA[Rui Figueiredo]]></dc:creator><pubDate>Wed, 15 Jan 2025 09:28:38 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2025/01/2025_01_15-banner-rui.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2025/01/2025_01_15-banner-rui.png" alt="2024: A Year of Structural Transformation"><p>DareData will close 2024 with a 5% revenue growth compared to 2023. At first glance, given the rapid growth in our market, one might be tempted to classify this year as underwhelming. However, 2024 has been a transformative year for us.</p><p>We started the year as a 100% consulting business. Consulting is highly dependent on people, and in small boutique firms like ours, this often means being heavily reliant on the partners. After four years of doubling our revenue, our motto for 2024 was <em>structure</em>—building the foundation necessary to enable DareData’s growth to the next level.</p><p>One of our primary objectives was to ensure no projects were directly managed by partners. 
To achieve this, we focused on creating a new layer of leadership composed of principals and tech specialists.</p><ul><li><strong>Principals</strong>: These are leaders capable of independently managing over €300,000 in recurring business.</li><li><strong>Tech Specialists</strong>: These individuals ensure that projects meet high technical standards, foster technical growth within the team, and position DareData as a leading brand in Data Science (DS) and Data Engineering (DE).</li></ul><p>By relieving partners of day-to-day project management, we noticed another challenge: we were still spread too thin across all areas of the company. To address this, we reorganized ourselves based on where we could deliver the most value:</p><ul><li>I am now focused on <strong>Technology</strong> and <strong>Finance</strong>,</li><li>Ivo is overseeing <strong>Marketing</strong> and <strong>Human Resources</strong>, and</li><li>Nuno is leading <strong>Product</strong> and <strong>Sales</strong>.</li></ul><p>Another significant milestone was restructuring our cap table. We brought in a new shareholder, <a href="https://eco.sapo.pt/2024/07/23/nos-compra-20-da-daredata-e-reforca-aposta-em-ia/#:~:text=%E2%80%9CA%20Nos%20acaba%20de%20adquirir,operadora%20de%20telecomunica%C3%A7%C3%B5es%2C%20em%20comunicado.">NOS</a>, which has been helping us professionalize DareData further.</p><p>Lastly, we are no longer a 100% consulting business. This year, we began building <strong>Gen-OS</strong>, our first product. Gen-OS is the AIOps platform that will bridge the gap between humans and artificial intelligence at the core of companies' operations.</p><p>With these changes, DareData is well-positioned for an incredible 2025—a year that will be markedly different from any before, full of new challenges and opportunities. 
We’re confident that the structure we’ve built in 2024 will set the stage for sustained success and innovation.</p>]]></content:encoded></item><item><title><![CDATA[The Gen-OS Newsletter - Is DareData Changing?]]></title><description><![CDATA[DareData's future is here!]]></description><link>https://blog.daredata.engineering/the-future-of-daredata/</link><guid isPermaLink="false">673cca10f7f87704715b0e3d</guid><category><![CDATA[GenAI]]></category><dc:creator><![CDATA[Ivo Bernardo]]></dc:creator><pubDate>Wed, 20 Nov 2024 09:04:58 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2024/11/1725363276032.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2024/11/1725363276032.jpg" alt="The Gen-OS Newsletter - Is DareData Changing?"><p>To answer the title question directly - yes (we'll build our first solution/product) and no (our culture and way of working with the team will mostly remain the same).</p><p>You may have noticed that DareData is developing Gen-OS, our platform that aims to redefine the future of <em>enterprise operating systems</em>.</p><p>Ok, sorry for the jargon. What do I mean by "<em>operating system</em>"? Basically, we believe that a company's operating system is any medium that <strong>exchanges information</strong> inside the company (or with the outside).</p><p>If you’re familiar with us, you might think of DareData as primarily a service and consulting company that connects talented freelancers with AI projects worldwide. While that’s still a part of what we do, DareData is aiming to become more than that.</p><p>At our core, DareData remains true to its mission and values. <strong>We are still committed to democratizing Machine Learning, fostering a network where professionals are well-compensated, personally fulfilled, and engaged in exciting, meaningful projects. 
</strong>We still thrive on identifying and taking on impactful projects for our people and our network members. Our culture remains intact, and we are as dedicated as ever to building innovative solutions for our clients. </p><p>But one of the ways in which we do it is evolving.</p><p>Why? There are several reasons, and I’ll dive into them in this post.</p><hr><h3 id="we-believe-that-the-consulting-work-we-ve-done-over-the-years-has-provided-us-with-significant-signal-into-a-recurring-need-an-enterprise-level-operating-system-that-integrates-with-legacy-databases-">We believe that the consulting work we’ve done over the years has provided us with significant signal about a recurring need: an enterprise-level operating system that integrates with legacy databases.</h3><p></p><p>Recently, we’ve been deeply involved in numerous GenAI projects. As a result, we’ve gained a profound understanding of how to address GenAI challenges—going far beyond simple API calls. At DareData we’ve been at the forefront of this field for over five years, doing deep technical work. Machine Learning is embedded in our DNA, and our team is made up of skilled ML engineers by design. These skills will be extremely valuable for building Gen-OS and we trust that our team is ready for it.</p><p><strong>With the rise of GenAI, the importance of robust software and data engineering has never been greater.</strong> These elements are critical for building scalable, secure, and efficient solutions that can truly unlock the potential of GenAI technologies for the future.</p><hr><h3 id="all-enterprises-will-go-from-digital-to-ai-first-processes-in-the-next-decade">All enterprises will go from Digital to AI-First processes in the next decade</h3><p></p><p>In the coming years, enterprises across industries will undergo a significant transformation, shifting from traditional digital processes to becoming fundamentally AI-first organizations. 
This will be the only way for most enterprises to keep their competitive advantage against new agile entrants in every market.</p><p>The transition from "digital-first" to "AI-first" is driven by the unprecedented capabilities of artificial intelligence. While digital transformation over the last decade primarily focused on digitizing workflows and enhancing connectivity, moving from paper to digital, the AI-first approach goes a step further. </p><p>AI-first enterprises will prioritize embedding AI into every aspect of their operations, from customer service to supply chain management, from marketing strategies to product development. This transformation requires not only adopting advanced technologies but also being able to keep some consistency across AI agents.</p><p>For companies to succeed in this AI-first era, they must view AI not as a standalone tool but as an integral part of their core business strategy. This means that AI agents will probably need to perform CRUD (Create, Read, Update and Delete) operations on the enterprises' legacy systems.</p><hr><h3 id="enterprises-will-need-open-code-to-integrate-ai-deeply-into-their-organization">Enterprises will need open code to integrate AI deeply into their organization</h3><p></p><p>To fully embrace the transformative potential of AI, enterprises will require open and accessible codebases to integrate AI into the core of their organizations. <strong>They will not want to fully depend on the pricing of closed solutions, particularly if these replace the entire backbone of the company.</strong></p><p>Open code enables businesses to go beyond typical off-the-shelf AI solutions, which often provide limited flexibility and generic capabilities. The transparency of open code is critical for fostering trust and accountability in AI-driven systems. 
As AI becomes more deeply integrated into decision-making processes, organizations need clear visibility into how the system operates.</p><p>From a practical standpoint, open code ensures interoperability across an enterprise's technology stack. Organizations often rely on a mix of legacy systems, third-party tools, and modern cloud platforms. Open-code solutions provide the flexibility needed to integrate AI much better into this diverse environment, avoiding vendor lock-in and ensuring long-term scalability.</p><hr><h3 id="replacing-the-backbone-of-enterprises-will-always-need-customization">Replacing the backbone of enterprises will always need customization</h3><p></p><p>Replacing the backbone of an enterprise—the core systems and processes that drive its operations—is never a one-size-fits-all endeavor. It inherently demands a degree of customization to align with the unique needs, goals, and complexities of each organization.</p><p>Enterprises are built on a foundation of systems that have been tailored over time to meet industry-specific requirements, regulatory standards, and customer expectations. These systems often include deeply integrated workflows, legacy software, and specialized tools that have become critical to day-to-day operations. Replacing or upgrading this backbone with new technologies, particularly with the introduction of AI-driven systems or next-generation platforms, cannot be accomplished with off-the-shelf solutions alone. </p><p><strong>We’ve seen that 70% of Gen-OS is transferable across companies,</strong> while the remaining 30% requires customization to meet the unique needs of each customer. 
And we don't believe that there is a product able to deliver 100% of what's needed to deeply embed AI into companies.</p><p><strong>And DareData lives happily with this balance</strong> - we're ok with providing consulting and tailored services to ensure the success of that 30% during every Gen-OS implementation.</p><hr><h3 id="we-want-to-be-fewer-than-what-we-first-envisioned">We want to be fewer than what we first envisioned</h3><p></p><p>When we first began shaping our vision for DareData, we imagined building a large team, filled with diverse roles and specialties to tackle the challenges as consultants.</p><p>However, as we’ve grown and learned, our perspective has evolved. We now believe that remaining a smaller team is not only more aligned with our values but also a strategic advantage in achieving our long-term goals.</p><p><strong>A leaner organization allows us to maintain the unique culture that defines us.</strong> With fewer people, communication is clearer, collaboration is stronger, and decision-making is faster. Everyone in the team feels more connected to the mission, the vision, and each other. The Gen-OS business model will enable us to scale the company without growing headcount to huge numbers (as a 100% consulting business typically requires).</p><hr><h3 id="so-are-you-still-doing-consulting">So... are you still doing consulting?</h3><p></p><p>The answer is a <em>yes</em>! We firmly believe that consulting is not just a service we provide—<strong>it’s an integral part of the discovery process that drives innovation and deepens our understanding of the challenges our customers face.</strong></p><p>Consulting allows us to work closely with clients, immersing ourselves in their unique contexts, workflows, and industries. <strong>It’s through this hands-on collaboration that we uncover the nuances of their problems and identify opportunities for impactful solutions (such as Gen-OS). 
</strong>Every ML and Data Engineering consulting gig is a learning experience that sharpens our expertise.</p><p>In fact, consulting played a role in shaping the very products we build. Take Gen-OS, for example. <strong>Much of its foundation is informed by the insights we’ve gathered over years of consulting.</strong></p><p><strong>Also, consulting in Data Science and Engineering provides fun and diverse projects for our team members </strong>- and we absolutely believe that they should have fun while working with us. :-) </p><hr><p><strong>Gen-OS is currently being implemented in 4 enterprises and we are seeing a lot of demand for it. </strong>We believe that we are in a great space to start our journey towards a product/solution-based company.</p><p>We might (and will) be wrong about a lot of things. But if we are right about a few, this should be a fun ride for us (the founders) and the entire team. Stay tuned for this newsletter to stay updated on our journey 🙂</p><p>If you want to know more about Gen-OS, ping me at ivo@daredata.ai</p>]]></content:encoded></item><item><title><![CDATA[How to Retain Talent]]></title><description><![CDATA[How DareData is retaining top talent]]></description><link>https://blog.daredata.engineering/how-to-retain-talent/</link><guid isPermaLink="false">670e61f7f7f87704715b0c65</guid><category><![CDATA[Business]]></category><category><![CDATA[Management]]></category><dc:creator><![CDATA[Sofia Ribeiro]]></dc:creator><pubDate>Wed, 16 Oct 2024 15:00:00 GMT</pubDate><media:content url="https://blog.daredata.engineering/content/images/2024/10/happy_off.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.daredata.engineering/content/images/2024/10/happy_off.jpg" alt="How to Retain Talent"><p>An aggressive and dominant communication style, unrealistic goals, inefficient structures where no one takes ownership, and rigid hierarchies where people fear failure. 
Does it sound familiar?</p><p>Throughout my career, I've encountered the same environment numerous times. In each new job, I've tried to make improvements, yet inevitably, I would find myself without any major victories, feeling tired, out of place, and anxious. At my core, I still believe we can improve the quality of our working conditions, and I hold to the belief that the people I have met along the way can drive this change. This is therefore a very dear topic to me.</p><p><strong>Both employees and employers suffer when they fail to adapt to each other.</strong> A recent <a href="https://www.weforum.org/agenda/2020/02/ceos-business-talent-retention-solutions-leadership/">survey of 700 CEOs revealed that “attracting and retaining top talent” was their number one concern</a>. Despite this, I don't believe companies are taking enough time to think deeply and act critically when it comes to retention.</p><p>The answer to improving retention is rather obvious - <strong>people stay where they are happy</strong>. And I believe that happiness in the workplace comes from feeling socially and financially safe, but also from growing.</p><h3 id="growth"><strong>Growth</strong></h3><!--kg-card-begin: markdown--><blockquote>
<p>&quot;No man ever steps in the same river twice, for it's not the same river, and he's not the same man.&quot; – Heraclitus</p>
</blockquote>
<!--kg-card-end: markdown--><p>The beauty of life lies in its impermanence. As humans, it is only natural for us to seek growth, challenge, and learning opportunities. That’s why it’s critical for companies to provide ways to develop talent.</p><p>For one, they should <strong>have clear career paths</strong> aligned with compensation. Secondly, companies need to provide the <strong>resources for individuals to learn.</strong></p><h3 id="psychological-safety"><strong>Psychological Safety</strong></h3><p>Humans are inherently social, and we have a primordial need to feel safe and valued within our community. Naturally, this extends to the workplace, where psychological safety is vital. To feel psychologically safe, people need to feel <strong>integrated</strong>, <strong>appreciated</strong>, <strong>heard</strong>, and <strong>respected</strong>.</p><h3 id="financial-conditions"><strong>Financial Conditions</strong></h3><p>Compensation is essential. Most people can only focus on other positive sides of their job once they feel financially secure.</p><p><strong>How can companies pay more?</strong></p><ul><li>Create a culture of innovation. Innovation is the true engine of economic growth.</li><li>Reduce overhead, increase frugality. When given a choice between fancy offices and better personal benefits/compensation, people always choose the latter.</li><li>Reinvent processes to increase productivity. <em>Pro tip:</em> check out our <a href="https://www.daredata.ai/success-stories">success stories</a> to learn how we can help you streamline your business.</li></ul><p>Improving financial safety isn't just about increasing salaries. Companies should <strong>support financial literacy initiatives,</strong> and, in this way, help people gain control over their financial lives.</p><p>It is also important to <strong>acknowledge that financial challenges are often hidden</strong>. 
Family responsibilities or unexpected health costs can significantly impact a person's financial well-being, even if they aren't always known.</p><hr><h1 id="the-daredata-way">The DareData Way</h1><p>Isn't it strange that, if employee happiness is so crucial for retention, the word 'happiness' is rarely found in the corporate lexicon?</p><p>At DareData, 'happiness' is a core metric in company assessments. Once happiness is measured, actionable steps can be taken to improve it. DareData's culture also revolves around this metric and, as a result, recruiting and retention are not a problem.</p><h3 id="company-culture">Company culture</h3><p>In the interview stage, rather than focusing solely on a candidate's proven track record, we prioritise <strong>potential for learning</strong> and <strong>cultural fit</strong>.</p><p>Beyond that, there is a genuine commitment to providing full transparency and involving everyone in major decisions. For example, in our <strong>State of the Union meetings</strong>, partners present the full view of the company’s roadmap, OKRs, and major updates, and, in regular <strong>Open Forums</strong>, we openly discuss ideas. I believe that this approach, combined with the cultural fit, was able to create a <strong>cohesive and collaborative team.</strong></p><h3 id="means-for-growth">Means for growth</h3><p>At DareData, there are three primary <strong>growth paths</strong>: Senior, Principal, and Tech Specialist, each of which is aligned with <strong>financial progression</strong>.</p><p>Also, in addition to regular <strong>feedback</strong> sessions, each person on the path to becoming Principal or Tech Specialist is <strong>mentored individually</strong>.</p><h3 id="efficiency-and-compensation">Efficiency and compensation</h3><p>Operations in DareData are very <strong>efficient</strong>. 
Meetings typically last about 15 minutes, people are assertive, and, since decision-making isn't centralised in one person, decisions are made quickly.</p><p>There is a genuine concern for sharing DareData’s success across the entire network. As a result, <strong>compensation rates are competitive</strong>, and there is an ambitious <strong>plan to continue improving</strong> financial compensation.</p><h3 id="to-be-mindful-about">To be mindful about</h3><p>While DareData’s efficiency is impressive, remote work can sometimes make interactions feel overly task-focused.</p><p>I believe it’s important to recognise this and remain mindful of the <strong>human aspect</strong> in our everyday conversations.</p>]]></content:encoded></item></channel></rss>