Key points to remember
- Poorly prepared tech due diligence can slow down the acquisition and directly impact the value of the technology asset
- Investors assess not only the technology itself but also the team’s ability to master it and present it clearly
- Cybersecurity must cover the entire attack surface, not just production
- Open source introduces technical and legal risks that must be managed
- Reliance on key people or an overly complex tech stack increases operational risk
- A product that is too specific to a single client limits the scalability of the business model
Arriving unprepared and giving vague answers
Providing vague or marketing-oriented responses instead of demonstrating a clear, structured and documented technical understanding of the technological assets.
Why this is a problem
During a Technical Due Diligence, auditors primarily seek to assess the technical maturity and the team’s understanding of the system. Vague or approximate answers immediately give the impression that the team does not really understand its architecture or technical constraints.
Best practices
- Maintain up-to-date technical documentation (system architecture, technical infrastructure, deployment pipelines, access and data security)
- Be able to explain what is working well, what is currently being improved and the existing limitations
- Present a structured and credible product roadmap
- Provide technical, factual and precise answers, without a marketing-oriented approach
- Remain transparent and measured, without overpromising or unnecessarily anticipating questions
Limiting cybersecurity to the production environment
Focusing only on production security and neglecting other attack surfaces.
Why this is a problem
During a Technology Due Diligence, investors and their auditors analyse the company’s entire attack surface. This includes subdomains, test environments, exposed endpoints and certain accessible internal tools.
In private equity or M&A transactions, external network scans are frequently carried out to identify access points exposed to the internet, particularly when the company handles sensitive customer data.
Focusing only on production leaves areas unsecured, which can be interpreted as a lack of overall control over cyber risks.
Best practices
- Map all exposed services (subdomains, APIs, staging environments)
- Regularly check external access points
- Secure developers’ workstations, internal tools and access to sensitive databases
- Implement security standards including MFA (multi-factor authentication), strict access management and continuous vulnerability monitoring
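The practices above start with an inventory. As a minimal sketch (the record fields, hostnames and thresholds below are illustrative assumptions, not Vaultinum's methodology), an exposed-service inventory can be checked automatically for the risk signals auditors look for: non-production environments reachable from the internet, missing MFA, and stale access reviews.

```python
from dataclasses import dataclass

# Hypothetical inventory record for one internet-facing service.
@dataclass
class ExposedService:
    hostname: str
    environment: str       # "production", "staging", "test", ...
    requires_mfa: bool
    last_reviewed_days: int

def flag_risks(services):
    """Return human-readable findings for services that widen the attack surface."""
    findings = []
    for s in services:
        if s.environment != "production":
            findings.append(f"{s.hostname}: non-production environment exposed ({s.environment})")
        if not s.requires_mfa:
            findings.append(f"{s.hostname}: MFA not enforced")
        if s.last_reviewed_days > 90:  # assumed review cadence of 90 days
            findings.append(f"{s.hostname}: access not reviewed in {s.last_reviewed_days} days")
    return findings

inventory = [
    ExposedService("app.example.com", "production", True, 30),
    ExposedService("staging.example.com", "staging", False, 200),
]
for finding in flag_risks(inventory):
    print(finding)
```

Even a simple script like this, run regularly against a maintained inventory, demonstrates the kind of overall control over the attack surface that external scans are designed to test.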
Thinking that open source is “free” and risk-free
Using open-source libraries without fully understanding the dependencies, versions and associated licences.
Why this is a problem
Open source lies at the heart of most modern software products, but it creates two major types of risk:
- A cybersecurity risk: a significant proportion of vulnerabilities come from outdated open-source dependencies. During a tech due diligence, auditors check the versions used, known vulnerabilities and critical dependencies.
- A legal and intellectual property risk: open source is not necessarily free or without constraints. Some licences, particularly copyleft licences, may impose strict obligations, such as publishing source code or making some parts of the product open source, which may be incompatible with proprietary software.
Best practices
- Maintain a comprehensive inventory of open-source dependencies (SBOM), enabling the identification of all components used in the product
- Clearly identify the associated licences to understand their terms of use and any restrictions
- Establish an approval process for the open-source libraries used
- Implement tools to automatically scan for vulnerabilities and licence restrictions
- Regularly update the open-source dependencies used in the product
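To illustrate the licence review step, here is a minimal sketch of how an SBOM could be screened for potential copyleft obligations. The package names and licence strings are illustrative, and the marker list is a rough assumption (MPL and EPL are weak copyleft): in practice a dedicated scanning tool and legal review would be needed.

```python
# Licence identifiers that may signal copyleft obligations (rough heuristic).
COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL", "MPL", "EPL")

def review_licences(sbom):
    """Flag SBOM entries whose declared licence may carry copyleft obligations.

    sbom: list of (package, version, licence) tuples, e.g. exported
    from a dependency scanner.
    """
    flagged = []
    for package, version, licence in sbom:
        if any(marker in licence.upper() for marker in COPYLEFT_MARKERS):
            flagged.append((package, version, licence))
    return flagged

# Illustrative SBOM extract; "somelib" is a hypothetical package.
sbom = [
    ("requests", "2.31.0", "Apache-2.0"),
    ("somelib", "1.4.2", "GPL-3.0-only"),
]
print(review_licences(sbom))
```

A check like this can run in CI alongside vulnerability scanning, so that a new copyleft dependency is caught at review time rather than during due diligence.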
Being overly reliant on a few key people
Having an architecture and organisational structure that relies on a handful of indispensable developers who hold a significant portion of the technical knowledge.
Why this is a problem
During an acquisition, investors also assess the resilience of the technical team. If critical knowledge is concentrated in one or two people, operational risk immediately increases, particularly in the event of their departure or unavailability.
Another point of concern is a technology stack that is too fragmented, with numerous languages and frameworks used depending on the product. This accumulation complicates maintenance, reduces team efficiency and slows down recruitment.
Best practices
- Document the architecture and systems to make the platform’s operation understandable without relying on a single person
- Ensure that technical knowledge is shared within the team (code reviews, internal documentation, knowledge transfer between developers)
- Standardise the technology stack to avoid a multiplicity of languages and frameworks across projects
- Limit the number of technologies used by favouring a limited number of backend languages and a consistent frontend
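Knowledge concentration can be made measurable. As a sketch (the commit data and the "share of changes by the top contributor" metric are illustrative assumptions, loosely inspired by bus-factor analyses of version-control history), one can compute, per file, how dependent it is on a single developer:

```python
from collections import defaultdict

def knowledge_concentration(commits):
    """Per file, the share of changes made by its top contributor.

    commits: list of (author, file_path) pairs, e.g. parsed from
    `git log --name-only` output. A share close to 1.0 means one
    person holds nearly all the knowledge of that file.
    """
    per_file = defaultdict(lambda: defaultdict(int))
    for author, path in commits:
        per_file[path][author] += 1
    return {
        path: max(counts.values()) / sum(counts.values())
        for path, counts in per_file.items()
    }

# Illustrative history: billing code touched only by one developer.
commits = [
    ("alice", "billing/engine.py"),
    ("alice", "billing/engine.py"),
    ("alice", "billing/engine.py"),
    ("bob", "api/routes.py"),
    ("alice", "api/routes.py"),
]
shares = knowledge_concentration(commits)
```

Files with a share near 1.0 are natural candidates for code reviews, pair programming and documentation, turning an abstract key-person risk into a concrete backlog.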
Adapting the product too much for specific clients
Integrating customer-specific code directly into the product, rather than maintaining standardised logic.
Why this is a problem
The product quickly becomes more complex, more fragile and harder to maintain. Every change or update can affect specific developments, increasing the risk of regression.
Ultimately, this limits the scalability of the SaaS model, as the product moves away from a single standard and becomes dependent on specific cases.
Best practices
- Design generic and configurable features, allowing the product to be adapted to client needs without modifying the code for each one
- Use simple mechanisms (configuration, activatable options, add-ons) to address customer-specific requirements without complicating the product
- Maintain a single product, common to all customers
- Avoid creating different versions of the product for each customer, which makes maintenance more complex and increases the risk of errors
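The configuration-driven approach above can be sketched simply. In this hypothetical example (the option names and customer identifiers are illustrative), per-customer overrides are merged onto a shared default configuration, so every customer runs the same code path:

```python
# One shared product configuration; customers only override options,
# they never get forked code.
DEFAULT_CONFIG = {"currency": "EUR", "advanced_reporting": False}

CUSTOMER_CONFIG = {
    "acme": {"advanced_reporting": True},
    "globex": {"currency": "USD"},
}

def config_for(customer_id):
    """Merge customer-specific overrides onto the shared defaults."""
    return {**DEFAULT_CONFIG, **CUSTOMER_CONFIG.get(customer_id, {})}

def generate_report(customer_id, revenue):
    """Single report implementation, adapted per customer via configuration."""
    cfg = config_for(customer_id)
    report = f"Revenue: {revenue} {cfg['currency']}"
    if cfg["advanced_reporting"]:
        report += " (advanced breakdown enabled)"
    return report
```

Because customisation lives in data rather than in code, a product update is tested once against the standard logic instead of once per customer fork.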
Anticipating risks throughout the investment cycle
To avoid these kinds of mistakes, Vaultinum’s Continuous Diligence allows you to regularly monitor key technological risks and gradually improve the quality of the technology asset throughout the investment cycle.
This approach effectively prepares the key stages of the transaction, particularly the exit, by implementing Vendor Tech Due Diligence on the target company’s side to identify areas of concern in advance and present the technology clearly to investors.
At the same time, this preparation enables the investor-side tech due diligence to be approached under better conditions, with the expected level of transparency and maturity. By supporting the target company’s management and technical teams throughout the investment cycle, from acquisition to exit, Vaultinum helps to ensure the reliability of the analysis, strengthen the credibility of the deal and optimise the terms of the transaction.
Disclaimer
The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.
The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Vaultinum. Requests for permission should state the purpose and the extent of the reproduction. For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.
