The failure of companies using AI
To listen to this article in its Bachata version, press play here:
Over the past few days, an MIT report, "State of AI in Business 2025," has been everywhere for one striking line: about 95% of company efforts show no return. The line is loud. The useful part is quieter: this is a focus problem, not a tooling problem. We read the report and several pieces around it (without chat assistants) and added what we see daily with clients. The pattern repeats: pilots flourish; production stalls.
Ricardo Campos
Karen Vinueza
Dr. Samuel West
Music: Miguel Ortigosa (cestlarumba)
2025
This conclusion is certainly eye-catching, but we want to dig deeper. That’s why we took the time to analyze the report carefully and extract what we consider the key elements.
Hundreds of media outlets echoed the publication, with headlines like these (some examples):
"95% Organisations Get Zero Return From Using AI Tools, MIT Study Shows" - NDTV
"AI: 95% of companies that use it have not increased profits" - Computer Today
"MIT study shatters AI hype: 95% of generative AI projects are failing, sparking tech bubble jitters" - The Economic Times
"MIT Finds 95% of Enterprise AI Pilots Fail to Boost Revenues" - Tech.co
Knowing this, for us (the researchers at neurona) it is far more important to understand the reason for these failures. According to the report, it comes down to a problem of focus. We fully agree.
To put this information in context, we analyzed this and other articles (without using any AI) and drew on our own experience as researchers and consultants. Our understanding as specialists can be useful in grasping how to tackle these kinds of challenges.
Something we have been seeing for some time now is that it is increasingly hard to find a professional (one who works behind a computer) who is not using ChatGPT or Gemini for almost everything. Brought into the professional environment, this interaction sometimes ends in frustration; at other times it produces an unmanageable amount of work, creating more stress and unease. Initial AI adoption rates are therefore very high, but the limitation appears when we realize how complicated it is to move from this popular use to something more advanced. We know this path well, because helping companies navigate it is what we do.
The challenge of going beyond ChatGPT
Once these personal tools are in place, professionals face the next level: incorporating them into their own workflows, those of other departments, and those of the customers and suppliers involved in them.
This integration is complicated, or at least not as intuitive as installing an app and using it. Only the technology and communications industries seem to be taking real advantage of these tools. The rest remain on the wrong side of business transformation; most sectors launch very active pilot programs, but these lack continuity.
This gap between the pilot phase and production is the key point of the report analyzed here. An app such as ChatGPT or Gemini succeeds at first because it is easy to try and very affordable, but we soon realize it is of limited use: it has very limited memory and little capacity for customization.
This is when we start to get nervous. Furthermore, if we work extensively with these simple applications, we realize that, at best, they fail often; at worst, we do not detect that they are giving us incorrect information, which seriously jeopardizes our project. Let's not forget that ChatGPT, for example, always ends up forgetting the context (it does not learn and cannot evolve on its own).
Is there a solution to all this?
As the report says, these problems stem more from an incorrect strategy than from a problem with the AI applications or models themselves. This is something we at neurona have been advocating for a long time: before integrating any tool (AI or not), we have to analyze carefully how the implementation will work and how it will affect the people involved in it. Workflows are very important and delicate, and there is a lack of investment in contextual learning; any agency, entity, or department must focus on this, or it will soon lose competitiveness. And, as MIT concludes, we should pay less attention to benchmarks and more to actual business results.
There is widespread fear that AI will drastically reduce the size of company workforces. What actually seems likely is that spending on outsourced processes will fall by up to 50%. At neurona, as external providers, we are aware of this, which is why we never stop learning how to integrate everything into our daily work. Sometimes you also have to know how to say no to certain implementations, but knowing why and for how long.
We are clear about our value: helping the people involved in a process build that capability internally. As the report states: "it is recommended to work with suppliers that offer customized systems." Almost five years ago we started training our own AI models (visual and text) at neurona, using algorithms invented decades earlier, so none of this is as new as it seems, nor as unattainable.
Where is the key to this evolution?
In automation. This is frightening and exciting at the same time. Automation should not be limited to sales and marketing processes (which currently account for 50% of AI investment budgets). The obstacle lies in the understanding and learning curve. If we want technology to be "ergonomic" (if you'll pardon the expression) in our processes, do we have to start studying something new from scratch? Something that changes this quickly? What is the real value of our effort? And its usefulness? Yes, we have to study, but with the help of professionals who can guide us towards a more manageable and effective process. This is what the MIT report calls "converting the pilot project into an integrated system for our workflow." These pilot projects (what we call Pop-up Labs) must be worked on in partnership with customers and external suppliers, and they do not consist solely of investing in a program.
Tools do not learn, and they do not integrate or adapt to our processes. This is something we have to learn to do on our own and coordinate. Rather than looking for general uses, we need to focus on adapting to these workflows (horizontal and vertical). The MIT team that conducted this study found that companies prefer to wait for their suppliers to update and adapt to AI before taking risks on their own with new applications and startups. This is completely understandable, but at the same time delicate, as it means losing a significant competitive advantage that will be quite difficult to regain later on.
Decentralization
Years ago, this same MIT department published an article called "A perspective on decentralizing AI," advocating a decentralized AI that is more original, secure, and exclusive: an AI that does not let a few organizations own the models, their interfaces, and the data that comprises them. This proposal is quite complex and may even be unrealistic, but it is clear that the trend is moving towards an agentic web where automatic agents make execution decisions according to our instructions. They must know our preferences and those of our customers, establish verification and incentive mechanisms, and make everything more intuitive. And all without having to depend on two or three large, very generalist providers.
In summary, this report leaves us with a very important headline:
It is necessary to invest in knowledge. The competitive advantage of using AI is based primarily on the personalization and optimization of workflows.
It is much less appealing to the media, but we believe that this is the case. Customizing existing tools and integrating them into our daily professional lives is the key. Oh, and don't be swayed by flashy demonstrations that end up raising doubts, creating concerns, and establishing environments based on fear.
Bonus
Whenever we work on an article, we like to share how we used certain tools. That's why, in this Bonus, we show our workflow with specific applications, and where these were more or less successful.
When we want a more reliable summary of some content, we put different LLMs (large language models) into "competition" with each other. Instead of using applications such as ChatGPT or Gemini, we use their underlying models (accessible "through the back door" via API) and connect them to an N8N workflow or to comparative model-analysis platforms.
The results are less targeted, there are no algorithms forcing certain outcomes, and we can see firsthand the difference in results between one model and another. It is even advisable to use older models to work with less defined and more open rules (obviously, knowing that we may obtain more “raw” or biased information).
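The "competition" we describe above can be sketched in a few lines. This is a minimal illustration, not our production workflow: the lambda stubs stand in for real API calls (in practice, HTTP request nodes in an N8N workflow), and the word-overlap score is just one simple way to flag where models diverge.

```python
"""Sketch: ask several LLMs for the same summary and measure agreement.

The model callables here are hypothetical stand-ins; each would normally
wrap a provider API call. Low pairwise agreement marks claims worth
checking against the original source.
"""

from itertools import combinations


def jaccard(a: str, b: str) -> float:
    """Word-level overlap between two texts (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)


def compete(text: str, models: dict) -> dict:
    """Collect one summary per model, then score every pair of summaries."""
    summaries = {name: call(text) for name, call in models.items()}
    agreement = {
        (m1, m2): jaccard(summaries[m1], summaries[m2])
        for m1, m2 in combinations(summaries, 2)
    }
    return {"summaries": summaries, "agreement": agreement}
```

A pair scoring well below the others is a signal to read that part of the source ourselves rather than trust any single model's version.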
We can use AI to summarize news stories or articles, but we must be careful because applications generate results based on their rules and interests, and this is something we need to be aware of and control. For example, when we asked the Brave browser about the study we analyzed in this article, we found a response based on sources other than the initial report. Instead of mentioning MIT as the source, it refers to the media outlets that have transcribed the news based on this information.
This is dangerous because we are accepting content as true without really knowing who is analyzing or processing it. We must demand reliable sources.
Don't forget that you absolutely must analyze and read the article on your own, and use these tools only to reinforce decisions or seek help in creating structures or analyses.
Generate a soft, calm bachata. Female lead vocal (mezzo), intimate tone. Sing the lyrics exactly as provided—no extra words, no ad-libs, no scatting.
Style & mix: 92–96 BPM, 4/4, key A minor. Nylon guitar + requinto motifs, bass, güira, light bongó, warm pads. Close vocal, gentle plate reverb, light tape grain.
Song structure:
Intro (4 bars)
Verse 1
Chorus (repeat 2×)
Verse 2
Verse 3
Chorus (repeat 2×)
Bridge (spoken–sung, soft)
Chorus (repeat 2×)
Outro (4 bars instrumental fade)
Performance rules: Keep phrasing natural; if lines are long, hold notes or rest—do not change words.
For the bachata song, the process was somewhat more complex. We drafted the lyrics with LLM tools and then reviewed them manually, looking for a more suitable rhyme. We used the detailed prompt shown above.
When it comes to generating visuals, we usually work with custom visual models, created through specific training runs on visual datasets that match our identity. To create the resources, we then used applications such as TouchDesigner, ComfyUI, Stable Diffusion Forge, and Runway. We took the original graphics from the MIT report analyzed in this article and created elements inspired by our visual culture and our message. The data graphics are very geometric and very useful for creating elements and patterns.
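To give a feel for what "geometric data graphics as raw material for patterns" can mean, here is a hypothetical toy sketch: it maps a row of chart-style values onto a repeating bar motif emitted as an SVG string, which could then feed a tool like TouchDesigner. The values and the function name are our own illustration, not anything taken from the MIT report's data.

```python
"""Toy sketch: turn chart-like values into a geometric SVG bar pattern.

Everything here is illustrative; the input values are placeholders,
not figures from the report.
"""


def bars_to_svg(values, bar_w=20, height=100, gap=4):
    """Render values as rectangles, scaling the tallest to full height."""
    peak = max(values) or 1  # avoid division by zero if all values are 0
    rects = []
    for i, v in enumerate(values):
        h = round(height * v / peak)
        x = i * (bar_w + gap)
        rects.append(
            f'<rect x="{x}" y="{height - h}" width="{bar_w}" height="{h}"/>'
        )
    width = len(values) * (bar_w + gap) - gap
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">' + "".join(rects) + "</svg>"
    )
```

Repeating or mirroring the output of a function like this is enough to build the kind of rhythmic, chart-derived patterns we describe.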