Sertis


In the future, we will be living in a society where AI algorithms are part of our daily lives. AI will work closely with humans and can sometimes reach solutions much as we do. Ethics should therefore be taken into account when designing and developing AI, so that the technology delivers the most benefit to humans.

Academics across many fields have published design guidelines for AI, aimed at designers and developers: to raise awareness of the security of automated systems, to promote human-centered AI, and to build a better understanding of how AI works. In this article, I'm going to share some examples of the different issues that make ethics important in AI.

1. Machine behavior

We should understand the function and behavior of AI in the same way that we understand the behavior of humans and animals, so that we can recognize the effects of AI on our social, cultural, economic, and political interactions. The study of machine behavior spans three levels of concern:

1.) The behavior of an AI agent alone,

2.) The behavior of a group of AI agents, and

3.) The behavior of a group of AI agents interacting with humans.

The behavior of individual AI agents can be monitored from a within-machine perspective, observing how a single agent's behavior changes across different environments, or from a between-machine perspective, comparing whether different agents behave the same or differently in the same environment.
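The within-machine and between-machine views can be sketched in code. Everything below is invented for illustration: the two stub agents, the "calm"/"hostile" environments, and the action labels are hypothetical; a real study would instrument actual AI systems in controlled settings.

```python
import random
from collections import Counter

# Hypothetical stub agents: each maps an environment label to an action.
def agent_a(environment, rng):
    # Always cooperates in a calm environment; acts randomly otherwise.
    return "cooperate" if environment == "calm" else rng.choice(["cooperate", "defect"])

def agent_b(environment, rng):
    # Always defects in a hostile environment; cooperates otherwise.
    return "defect" if environment == "hostile" else "cooperate"

def observe_behavior(agent, environment, trials=1000, seed=0):
    """Estimate how often an agent takes each action in a fixed environment."""
    rng = random.Random(seed)
    actions = Counter(agent(environment, rng) for _ in range(trials))
    return {action: count / trials for action, count in actions.items()}

# Within-machine view: the same agent observed across different environments.
within = {env: observe_behavior(agent_a, env) for env in ["calm", "hostile"]}

# Between-machine view: different agents observed in the same environment.
between = {name: observe_behavior(agent, "hostile")
           for name, agent in [("A", agent_a), ("B", agent_b)]}
```

Comparing the `within` distributions shows how one agent's behavior shifts with its environment, while comparing the `between` distributions shows how two agents diverge under identical conditions.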

An interesting question still under discussion: will having AI as part of daily life inevitably change human behavior, or are humans the ones who change AI behavior?

2. Ethically aligned design

Members of the IEEE, the world's largest technical professional organization, are developing a framework for Artificial Intelligence based on the following principles:

1. Human rights:

Problem: AI's impact on human rights

Recommended actions to take:

1.) Establish a framework to protect privacy and build public confidence.

2.) Translate existing and forthcoming regulations into policies and technical requirements that developers can follow.

3.) AI should not be granted rights equal to humans' and should remain under human control.

2. Well-being:

Problem: The existing key indicators of growth do not take into account the impact of technology on human well-being.

Recommended actions to take: Pay close attention to human well-being when designing AI, using the existing relevant key indicators as a reference.

3. Accountability:

Problem: How can we ensure that every person involved in creating an AI algorithm is accountable for considering the system's impact on the world and responsible for what they do?

Recommended actions to take:

1.) Courts should clearly establish responsibility and liability for the use and development of AI algorithms.

2.) Designers and developers should consider the diversity of users' cultural norms.

3.) Multi-stakeholder ecosystems should be developed to shape best practices and legislation.

4.) Systems of registration and record-keeping should be established so that the people responsible for an AI system can be identified.

4. Transparency:

Problem: How can we ensure that AI is transparent?

Recommended actions to take: Develop new measurable, testable standards for transparency, so that systems can be objectively evaluated and their levels of compliance determined.

5. Awareness of misuse:

Problem: Will we be able to expand the benefits of AI technology while reducing the risk of its unethical use?

Recommended actions to take:

1.) Share knowledge about the ethics and security of using AI in the right way.

2.) Expand that knowledge more broadly.

3.) Educate the government sector, lawyers, and the general public about the problems, in order to avoid anxiety and confusion over AI technology.

When AI is free to make decisions, its autonomy should be designed to follow the norms and values of the society in which it operates, and we should be able to explain how the AI works in order to build confidence in using it. Moreover, when norms and values are built into an AI's design, they should be measurable and evaluable against standards accepted by users in that society, and the evaluation should be repeated over time to track its outcomes.

The EU's High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence, following the publication of a draft in 2018 on which more than 500 comments were received through an open consultation.

According to the Guidelines, trustworthy AI should be:

1. Lawful - respecting all applicable laws and regulations

2. Ethical - respecting ethical principles and values

3. Robust - both from a technical perspective and taking into account its social environment

Regulation of the development and use of AI technology must be grounded in ethics, to ensure that AI is fair to everyone, with particular attention to vulnerable groups such as children, people with disabilities, and the underprivileged. There should also be plans in place to cope with unexpected issues that could affect society.

Overall, the EU draft guidelines and those of other groups, such as IBM's Everyday Ethics for AI, put forward the same human-centered requirements as the IEEE's. AI's main promise is to take us to the next level of progress and innovation and to make most people better off, but we now face challenging questions about the ethics, norms, and values that go into developing and using it. We should take these questions seriously.

