Ethics in artificial intelligence: transparency and trust as business values
September 13, 2025 – 00:00

The expansion of AI opens unprecedented opportunities, but it also raises urgent dilemmas about its responsible use.

The arrival of artificial intelligence in practically every field has undoubtedly had a great impact on all aspects of our lives. Surprising applications of AI technologies emerge daily and, as companies find new spaces in which to apply them, the impact on everyday life will be even greater. Consequently, as on so many other occasions, accelerated technological progress also raises questions and scenarios that cannot be ignored. One of the most obvious and, perhaps, least discussed in depth today concerns ethics and the responsible use of AI.

What is ethics? The Royal Spanish Academy defines ethics as “the set of moral norms that govern a person’s behavior in any sphere of life.” This concept is built through a process of reflection on one’s own actions and the motives behind them, the internalization of certain moral norms, and the construction of a “moral conscience” that guides decisions in various contexts. It is easy to see that ethics is not something that is “injected” or simply learned; it is dynamic and evolves, based on each person’s experiences and sustained by the capacity to reflect on one’s own existence.

In philosophy, qualia are the subjective qualities or sensations perceived in personal experience, which cannot be fully conveyed through objective words or concepts. Current AI models lack the ability to reflect autonomously on their “experiences” and, as a consequence, cannot develop a “conscience.” This limitation creates a scenario in which the responsibility for defining the ethical values and principles that govern these systems falls squarely on the people and companies that make them available to their users and customers.

In this sense, transparency in the decision making of AI agents is essential to evaluate the ethical and moral impact they can have on society. An unidentified bias in a system that uses AI can have consequences ranging from discrimination to severe damage to the reputation of people or companies, even with legal consequences for those involved. Finally, the lack of an ethical framework delimiting the responsible use of this technology can lead to scenarios where the adoption of these technologies is negatively affected by fear or distrust.

“Explainability” thus acquires a crucial role: if it is not understood how AI systems arrive at their conclusions, unidentified biases can remain hidden and reproduce over time. This puts the focus on the ethics of those who design, build, and operate these systems. The biases or intentions present in an AI agent are nothing more than an amplified reflection of the biases or values of the people involved in its development and use. Therefore, the ethics of any AI-based system depends, ultimately, on the commitment and responsibility of those who take part in its creation and application.

Ultimately, it is we humans who decide to use AI to make decisions, and that implies a high degree of responsibility for the results obtained and for the implications of those results in each context.

In November 2021, UNESCO adopted the first global standard on the ethics of AI. Its fundamental pillars are four: respecting human rights, fostering peaceful societies, guaranteeing diversity and inclusion, and promoting care for the environment and ecosystems. The standard holds that the ethical and legal responsibility for AI must be attributable to natural or legal persons. Beyond specific legal frameworks that establish essential aspects such as the protection of personal data, Argentina has aligned itself with the UNESCO recommendation on the definition of responsible and ethical use of AI.

Technology giants worldwide, the main drivers of the massive use of AI, promote the responsible use of these technologies by defining operational principles and recommendations for the use of their platforms and services. These definitions involve the participation of several areas of a company, representing different perspectives on the everyday use of these tools. The more diverse those perspectives, the more representative the ethical principles obtained as a result of the exercise.

The governance and management of new technologies are fundamental to ensuring that the adoption of AI develops within an ethical and responsible framework. It is essential to understand that the principles and values that guide it are not the exclusive responsibility of technical areas; on the contrary, they are best addressed through multidisciplinary work that incorporates the perspectives of various actors. Beyond compliance with the legal regulations of each sector, it is essential to clearly define each organization’s values and operating principles, so that they are reflected in the systems that use AI. As in human life, these definitions need to be reviewed and debated periodically, thus guaranteeing a responsible and up-to-date use of the available technology.

The development and responsible, ethical use of AI that is unbiased, diverse, and transparent, and that reflects the values favoring business evolution while keeping the focus on people, is a commitment we must assume and promote as facilitators of new technologies that are no longer aspirational ideas but everyday realities.

AI Strategy Architect at Ingenia.

Source: Ambito