Red Flags: what they are and how to use them

When working with a huge procurement dataset containing tens of thousands of contracts, reading the fine print of each one is not an option. But you can use red flags to surface the tenders with suspicious details, and then study those in depth. A red flag doesn't mean a contract is corrupt or illegal; it's just a hint that a closer look may reveal irregularities. And the more red flags a tender shows, the stronger the suspicion: start there.

Like Tolstoy’s happy families, valid contracts are all alike: they follow the same steps and procedures in a consistent way, designed to ensure transparency, fair access and accountability throughout the process. But each irregular one is irregular in its own way. Every phase of the process is subject to anomalies: are the contract specifications fair, or were they written with one particular bidder in mind? Is every potential bidder informed of the opportunity in due time, or is one advised to prepare its proposal in advance? Are bids evaluated with fair and objective criteria, or are qualitative ones used to select the desired proposal unfairly? Are bidders independent, or do they all belong to the same individual or conglomerate, creating a false sense of competition? Do competing bidders agree to split the market, creating a cartel? And so on. Since looking for every potential irregularity is unfeasible, we need to start our investigation by understanding the typical corruption mechanisms in the country or region we are working on. Corrupt individuals tend not to share their tactics voluntarily, which makes our life harder. On the other hand, they are sometimes overconfident and not too worried about covering their tracks, feeling safe in the complexity and darkness of bureaucratic procurement processes. When corruption scandals are exposed in the media or prosecuted in the courts, we can look at the underlying techniques used, which often highlight weaknesses and blind spots in the compliance and oversight mechanisms.
A good example of this approach is Mapping high-level corruption risks in Spanish public procurement, published by the Corruption Research Center Budapest, where the author examines high-profile Spanish corruption scandals involving public procurement, such as the Gürtel or Palma Arena cases, and documents the most common “corruption techniques”, such as the abuse of the urgency and ‘negotiated without publicity’ procedures, or deficient justification of contract awards. The paper then goes on to study which red flags might have warned of the irregularities, and whether the necessary information is publicly available and of high enough quality, a prerequisite for automating the red flag analysis.

The RECORD project aims to reduce corruption risk in local-level public procurement processes. It does so partly through an interactive online tool, which allows citizens, journalists or even public officials to monitor procurement processes and their implementation. To catch fraud risks, the tool applies red flags and highlights those tenders found suspicious. Although risky does not mean corrupt, flagged procurement documents are worth checking. The RECORD team created a set of 100+ risk indicators, collected from previous projects and research, and published it under an open license to serve as a stepping stone for other developers, activists and academics. To select the most valuable red flags, the team analysed their technical feasibility (i.e. is the required data available? Is it structured, or buried in meeting minutes?), as well as their relevance in the target countries of the project: Hungary, Poland, Romania and Spain.


Whether you are using a procurement site or looking at a spreadsheet on your own laptop, there are some basic red flags that can help you get started when looking for potential stories:
  • One company, many contracts. All around the world, procurement regulations share one principle: the bigger the contract, the more controls in the process, in order to ensure open bidding and the best value for money. This creates an incentive to evade those controls: as exposed by Civio’s investigation described earlier, public bodies can collude with companies to split a contract into smaller pieces that can then be awarded directly, without competition or transparency.
  • Short deadlines. If an open tender allows interested providers only a couple of days to present their proposals, then beware: the tender is open in name only. Why the rush? It could very well be that the only company that manages to submit a full proposal is the one that was tipped off in advance, through private channels. At the very least, it’s a sign of bad planning and management on the public body’s side.
  • Single bidding. It’s in the interest of the public administration to receive a large number of bids for each tender, as increased competition pushes prices down, increasing value for money. Sometimes, however, a tender will receive only one bid. This, in itself, is not always bad: some specialised products, for example, have a limited number of suppliers, sometimes even just one. However, a consistent trend of single-bidder tenders for one public body, or a large percentage of those being awarded to the same company, could point to bad practices. Look at the tender specifications, for example, and check whether they artificially limit competition by adding clauses that only one particular company can fulfil.
  • Same product, different price. When looking at the prices paid by several administrations in a given period of time, e.g. a year, we can sometimes find that they’re paying different amounts for the same product or service. What’s going on? Is the public body getting a bad deal because it didn’t attract competition? Were they misinformed and unaware of the real market price? Or is there some sort of collusion with the bidder? 
  • First you win, then you amend. A traditional form of corruption and abuse in some countries, this one takes place not at the time of the award, like the previous ones, but later in the process, when monitoring and compliance are often relaxed. In this scenario, a company low-balls its offer in order to secure the tender, sometimes bidding even below its costs. A few months later, however, it applies for an amendment with additional costs, recouping its losses (and more!). When the final cost of a public work is three times its initial budget, ask yourself whether that wasn’t the plan from the beginning.
  • Look at what isn’t there. Lack of transparency breeds corruption, as the dark spots are abused by interested parties. As we just saw, amendments are a common tool, since they’re generally not published, or not as consistently and clearly as the originating tenders. In general, look for opacity, i.e. empty cells in your dataset. Is some field consistently missing? Only for some types of tenders and particular bidders? Is it intentional, or a sign of bad data management? Is the information required by law?
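To make these checks concrete, here is a minimal sketch in Python of how three of the basic flags above could be computed over a contracts table. Everything in it is illustrative: the field names, the toy dataset, the 15,000 direct-award limit and the 10-day minimum deadline are all assumptions, since the real thresholds depend on each country’s procurement law and data schema.

```python
from collections import Counter

# Assumed thresholds; real values vary by country and contract type.
DIRECT_AWARD_LIMIT = 15_000  # contracts below this can be awarded directly
MIN_DEADLINE_DAYS = 10       # assumed minimum reasonable submission window

# Toy dataset; field names are illustrative, not from any real schema.
contracts = [
    {"buyer": "City A", "supplier": "Acme", "amount": 14_900, "bids": 1, "deadline_days": 3},
    {"buyer": "City A", "supplier": "Acme", "amount": 14_500, "bids": 1, "deadline_days": 5},
    {"buyer": "City A", "supplier": "Acme", "amount": 14_800, "bids": 1, "deadline_days": 4},
    {"buyer": "City B", "supplier": "Bolt", "amount": 80_000, "bids": 4, "deadline_days": 30},
]

def split_contract_suspects(contracts, limit, margin=0.10):
    """One company, many contracts: count awards per (buyer, supplier)
    that sit just below the direct-award limit."""
    near_limit = Counter()
    for c in contracts:
        if limit * (1 - margin) <= c["amount"] < limit:
            near_limit[(c["buyer"], c["supplier"])] += 1
    # Several near-limit awards to the same supplier suggest a split contract.
    return {pair: n for pair, n in near_limit.items() if n >= 2}

def single_bid_share(contracts):
    """Single bidding: share of single-bid tenders per buyer."""
    totals, singles = Counter(), Counter()
    for c in contracts:
        totals[c["buyer"]] += 1
        if c["bids"] == 1:
            singles[c["buyer"]] += 1
    return {buyer: singles[buyer] / totals[buyer] for buyer in totals}

def short_deadline_flags(contracts, min_days):
    """Short deadlines: tenders with a suspiciously small submission window."""
    return [c for c in contracts if c["deadline_days"] < min_days]
```

On this toy data, `split_contract_suspects` flags City A’s three near-limit awards to Acme, and `single_bid_share` shows City A receiving only single bids. The same logic can be expressed as spreadsheet formulas; the point is that each flag reduces to a simple aggregation over columns you may already have.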


The red flags described above are just the beginning. They can be implemented with moderate skill using spreadsheets, as long as the data is there (which, as we know, is not always the case). But we can monitor more complex scenarios if we put in some extra effort. For example, a group of companies can collude to share the market among them. Cartel, anyone? In the basic case, they allocate regions or public bodies to each member and stop bidding on each other’s tenders. This is too obvious, though, and the limited competition would show up in our basic red flags. If they’re more cunning, they’ll take turns to bid. Or, even better, they’ll submit overpriced bids to fake a well-functioning market. Uncovering this automatically may require network analysis. And sometimes you need to go beyond your procurement dataset and cross-reference it with other sources, such as the company register (an essential one) or a list of donors to political parties. Are those competing bidders real, or are they owned by the same person or company, perhaps even sharing the same office? Who controls the company that just won a tender? Are there conflicts of interest with the awarding committee? To answer these and other questions, you’ll need the extra datasets.
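A first step towards the network analysis mentioned above can be sketched without any graph library: group tenders by their exact set of bidders and look for recurring groups where the win rotates among members. This is only a crude heuristic under assumed data (toy tenders, illustrative company names, a hypothetical minimum of three tenders per group), not a proven cartel detector; real investigations would follow up each hit manually.

```python
from collections import defaultdict

# Toy tenders; company names and fields are illustrative.
tenders = [
    {"id": 1, "bidders": ["Alpha", "Beta", "Gamma"], "winner": "Alpha"},
    {"id": 2, "bidders": ["Alpha", "Beta", "Gamma"], "winner": "Beta"},
    {"id": 3, "bidders": ["Alpha", "Beta", "Gamma"], "winner": "Gamma"},
    {"id": 4, "bidders": ["Delta", "Epsilon"], "winner": "Delta"},
]

def co_bidding_groups(tenders, min_tenders=3):
    """Group tenders by their exact bidder set; a set that recurs often,
    with wins spread across all its members, may indicate bid rotation."""
    groups = defaultdict(list)
    for t in tenders:
        groups[frozenset(t["bidders"])].append(t["winner"])
    suspects = {}
    for bidders, winners in groups.items():
        if len(winners) >= min_tenders and set(winners) == set(bidders):
            # Every member of the group has won at least once: possible rotation.
            suspects[tuple(sorted(bidders))] = winners
    return suspects
```

Here the Alpha/Beta/Gamma trio is flagged (three tenders, one win each), while the Delta/Epsilon pair is not, as it appears only once. Overpriced cover bids would need price data on losing bids as well, and shared-ownership checks would bring in the company register.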

Author: Eva Belmonte, Madrid-based investigative journalist (Civio, Spain)