Lily ليلي
• Technology can be racially biased, as when Google's Photos app mislabeled a black couple as gorillas.

• Structural inequalities are reflected in AI systems, and there is a need to address and fix the racial bias embedded in these systems.
Lily ليلي
• Instances of racial bias in technology include unfair dismissals of ethnic-minority Uber drivers and pulse oximeters giving inaccurate readings for black patients.

• Technochauvinism, the belief that technological solutions are inherently superior and neutral, perpetuates embedded biases that affect black people across many areas of their lives.
Lily ليلي
• Racial bias can also be present in housing and mortgage algorithms, leading to higher rejection rates for black applicants.

• Proposed solutions include developing software tools to identify and mitigate bias within technologies and implementing tighter regulations on tech companies.
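The article does not spell out how such bias-detection tools would work. As a rough, hypothetical sketch in Python, an auditing tool might compare approval rates between groups using a disparate impact ratio; the data, group names, and 0.8 threshold below are illustrative assumptions, not details from the article.

# Minimal sketch of the kind of check a bias-auditing tool might run,
# using hypothetical mortgage-decision data.

def selection_rate(decisions):
    """Share of applicants in a group who were approved (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.

    A common rule of thumb (the 'four-fifths rule') flags values
    below 0.8 as a sign of potential adverse impact.
    """
    rate_a = selection_rate(group_a)
    rate_b = selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical approval outcomes for two groups of applicants.
black_applicants = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
white_applicants = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]

ratio = disparate_impact_ratio(black_applicants, white_applicants)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged: approval rates differ substantially between groups.")

A real audit would of course use actual decision data and look at more than one fairness metric, but this is the general shape of an automated bias check.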
Lily ليلي
• A regulatory sandbox, a software system that lets companies test their algorithms for bias before release, is being developed to prevent biased algorithms from reaching the public.

• Concerns exist about whether tech companies can police themselves, and regulation may be needed to ensure racial bias is actually reduced.
Lily ليلي
• Tighter regulation, particularly by government regulatory agencies, is crucial to combating racial bias; the EU has taken steps in this direction with proposed rules on AI oversight.

• Effective regulation and a shared understanding of how risk is calculated are needed to prevent racism from shaping the digital future.