Researchers have proposed the TrustNet Framework, a system designed to strengthen public trust in artificial intelligence, Humanities and Social Sciences Communications reports.
Artificial intelligence has significantly simplified and optimized many aspects of human life. Its influence, however, is not free of downsides, including security threats, the spread of misinformation, and unintended failures. To address these risks, a group of researchers from the USA and France has developed a dedicated concept.
The development of the TrustNet Framework proceeds in three stages:
1. "Problem transformation," in which the broad question of trust in AI is connected to precise scientific knowledge.
2. "Acquisition of new knowledge," carried out jointly by scientists and stakeholders. At this stage, the core elements of trust are brought together: reliability, risk, the user, and the domain of application.
3. "Transdisciplinary integration," in which the results are evaluated from both practical and theoretical perspectives and conveyed to society and the scientific community.
To create the new system, the researchers brought in specialists in engineering, psychology, ethics, sociology, and law.