Technology and Combating Insecurity: What Happens to Our Privacy?

February 14, 2024

Currently, Ecuador is experiencing an unprecedented security crisis that has left alarming numbers of victims, both direct and so-called “collateral” ones, along with widespread fear and, above all, a longing for better days in which fear does not dictate our lives. Driven by this fear, many people living in Ecuador demand that the authorities resort to any means possible to contain and eradicate organized crime. And when they say that, they are, in a way, consenting to the idea that the possibility of living in peace is a greater good that justifies the violation of fundamental rights.

As we have said before, privacy is a human right that, in turn, enables the exercise of other rights, such as freedom of expression. But what happens when the fight against insecurity becomes a pretext for using technological means to monitor the population?

For a long time, surveillance has been carried out using different methods, mainly physical ones, to collect information about specific targets and, of course, with certain limitations. While those methods are still used today, technology has made it possible to monitor large portions of the population at low cost and with lower risks than in the past. Worse still, much of the information used for certain types of surveillance and profiling comes from us, every time we hand over our data to the many platforms that make up our lives today.

Faced with contexts of extreme insecurity, one of the most common responses from authorities is to turn to technology as a tool to contain violence. Facial recognition cameras, artificial intelligence (AI), and signal jammers are just some of the innovative offerings presented as infallible solutions to restore calm to the country. But does it really make sense to give up our rights for offerings that emerge in moments of high internal turmoil?

There are multiple testimonies of how technology has not been enough to fight criminal violence. London is one example. In 2019, the city ranked third worldwide among the most surveilled cities, with 68.4 cameras per thousand inhabitants. Even so, its crime index stood at 52.5. In other words, there was no clear evidence that surveillance had contributed to reducing crime.

Without going further, let’s look at the case of Ecuador. The country has the ECU 911 Integrated Security Service, with more than 6,500 surveillance cameras and 70,000 security kits installed in buses and taxis. Despite this, Ecuador is among the 10 most dangerous countries in the world according to the 2023 Global Organized Crime Index of the Global Initiative Against Transnational Organized Crime (GI-TOC). In the case of Guayaquil, where 16,000 facial recognition cameras are already installed, the city, together with Durán and Samborondón, accounts for 35.65% of all homicides in the country, with a rate of 40.8 per hundred thousand inhabitants, according to the Ecuadorian Observatory of Organized Crime. And, despite all the alleged benefits of this type of technology, in Ecuador only 1 in 10 murders is solved, while the rest go unpunished.

Technology has brought undeniable advances to our lives, but presenting it as an essential component of the fight against crime is a mistake. Moreover, it is crucial to question the origin of the technologies that are meant to be deployed in the country, not out of chauvinism or nationalism, but because of the ethical and practical implications they entail. For example, AI systems, mostly trained in countries of the Global North, often fail to reflect the diversity and complexity of our local contexts. There have been cases in which AI algorithms perpetuated racial profiling, failing to distinguish among individuals of African descent, Indigenous people, or Asians, or worse, identifying them as animals or potential criminals, while white individuals do not face these kinds of problems.

Similarly, the use of AI has been proposed for a more effective deployment of law enforcement, under the pretext of containing the rise in crime. However, the question arises: is it really necessary to resort to this type of technology to achieve that purpose? Aren’t we at risk that historically racialized and impoverished communities and individuals will be profiled as suspects, thus perpetuating past injustices?

In Ecuador, concrete examples of this phenomenon can already be observed: people from impoverished neighborhoods or of certain ethnicities are subjected to profiling, torture, and humiliation. These situations have gone viral on social media, generating intense debate. On one side are those who justify these human rights violations; on the other, those who question the need for such practices and criticize their viralization, which exploits the massive reach of digital media today.

We live in increasingly hyper-surveilled societies. Governments and their institutions hold vast amounts of information about every aspect of our lives, and yet it is clear that crime and violence simply adapt to new conditions, technological or otherwise, to continue operating. Many arguments can be made in favor of measures that help stop the excessive violence we are forced to live with day after day. Yet many of these measures clearly violate human rights, and technology does not escape falling into the same patterns.

Technology could become an effective ally in eliminating violence if its implementation were guided by the political will to address root causes. Brute force, hyper-surveillance, and techno-solutionism are unlikely to solve what years of neglect have caused.