Giving Humanoid Robots a "Fitbit": Seeking Technological Solutions in the Great Governance Divergence

Written by Yuchen Wang
MSc Science, Technology and Society graduate, University of Edinburgh, drawing on her independent research into the international regulatory landscape of the humanoid robotics sector

When Tesla's Optimus or Figure AI's robots step out of factories and into our lives, who is responsible for their unknown risks?

While pursuing my MSc in "Science, Technology and Society" (STS) at the University of Edinburgh, I had the privilege of joining RideScan and delving into the accountability dilemmas of global humanoid robot governance. RideScan's vision is intuitive yet profound – to build a "Fitbit" for robots. This is more than just a slogan; it directly addresses the "structural dilemma" in current global humanoid robot governance, urgently calling for innovation at the technological level to break the deadlock.

Part I: Consensus, Divergence, and the "Structural Dilemma"

While China, the US, and the EU all view humanoid robots as a strategic battleground, their governance paths have undergone a "Great Divergence." Bovens' (2007) accountability framework divides the accountability chain into "Information – Explanation – Sanction." However, based on their respective sociotechnical imaginaries, China, the US, and the EU anchor responsibility within entirely different institutional cores.



🇪🇺 The EU: "Preventive Compliance" and "Innovation Screening"

Through the *AI Act*, the EU "engineers" ethics, setting extremely high auditing thresholds prior to market access. This reflects its strategy as a "Normative Power" (Manners, 2002), but leads to "compliance inflation": companies must spend vast sums to compress dynamic risks into hundreds of pages of static documentation (Veale & Zuiderveen Borgesius, 2021). The result? SMEs are systematically excluded and market diversity suffers, creating a "Chilling Effect" (Madiega, 2021).

🇺🇸 The US: "Ex-Post Justice" and the "Remedies Gap"

The US adheres to "Permissionless Innovation," relying on markets and tort litigation to price risks after the fact (Calo, 2015). However, the "algorithmic black box" of humanoid robots drastically increases the costs of judicial fact-finding (discovery) and fractures the chain of liability (Scherer, 2015). Lawsuits become "wars of attrition" in which only giants can afford the top experts needed to clarify responsibility, leading to a "stratification" of access to legal remedies.

🇨🇳 China: "Administrative Leadership" and "Cognitive Lag"

Through the "filing system" and quantitative targets, China achieves efficient mobilization and supply chain security (Naughton, 2021). However, using static administrative checklists to cover dynamic, emergent technological risks easily creates "cognitive blind spots." For example, when faced with semantic-logic vulnerability attacks like "Open Sesame" (Jones et al., 2025), review mechanisms that rely on established standards often react with delay, exposing the limitations of "explanatory substitution" (Sheehan, 2023).

The Core Paradox:

Countries use their most adept institutional tools to address new risks, but these very tools expose their systemic functional limitations when confronted with the new technological entity (embodied autonomous robots), resulting in "institutional lock-in."

Part II: RideScan as a "Technological Antidote" Transcending the Dilemmas

RideScan is building a "Digital Immune System" independent of traditional paradigms. Through its continuous monitoring platform, it attempts to provide a shared "technological foundation" beneath the three divergent governance tracks described above.

1. Continuous Telemetry: Transforming "Static Compliance" into "Dynamic Evidence"
A. Targets the Dilemma: the EU paradigm's costly ex-ante audits cannot keep pace with dynamic risks.
B. Like a Fitbit monitoring vital signs, RideScan records the robot's full-spectrum raw data (movement, environment, decision states) 24/7. This not only provides a real-time evidence stream for "auditability" without massive manual compilation but also captures "edge cases" in unstructured environments that standard tests miss, offering real-world feedback for iterating standards (a minimal data sketch follows this list).

2. Predictive Alerts: Acting Before "Ex-Post Accountability" and "Cognitive Lag"
A. Targets the Dilemma: Compensates for the slow judicial response in the US paradigm and the blind spots to unknown risks in the Chinese paradigm.
B. By analyzing shifts in complex spatiotemporal patterns during abnormal gait or repeated autonomous task execution, the system can issue alerts before major failures or safety incidents occur. This adds a proactive, data-driven risk buffer to "ex-post accountability" and "static checklists," moving the point of intervention from "after the fact" to "during" or even "before" (a toy detector is sketched after this list).

3. Independent Trust Log: Providing an Auditable Causal Chain for the "Black Box"
A. Targets the Dilemma: Addresses the ultimate bottleneck across all three paradigms – accountability ambiguity caused by the algorithmic black box.
B. RideScan provides a third-party-maintained, tamper-resistant "data layer." This objective record offers a consistent, verifiable factual basis for regulatory investigations (EU), judicial evidence collection (US), and administrative responsibility determination (China), serving as an "anchor point" for rebuilding trust among all parties (the final sketch below shows one way such a log resists tampering).
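
To make "continuous telemetry" concrete, here is a minimal Python sketch of what a robot's "vital signs" record and an append-only logging loop could look like. Everything here is an illustrative assumption on my part (the `TelemetrySample` fields, the 10 Hz cadence, the JSON-lines format), not RideScan's actual schema or API.

```python
# Illustrative only: a toy "vital signs" record and logging loop,
# NOT RideScan's real data model.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class TelemetrySample:
    """One snapshot of the robot's state; fields are hypothetical."""
    timestamp: float        # seconds since the Unix epoch
    joint_positions: list   # radians, one entry per actuated joint
    battery_level: float    # 0.0 (empty) to 1.0 (full)
    decision_state: str     # label of the active planner/task

def capture_sample() -> TelemetrySample:
    # Stand-in for real sensor reads; the values below are dummies.
    return TelemetrySample(
        timestamp=time.time(),
        joint_positions=[0.0] * 28,
        battery_level=0.93,
        decision_state="walk_forward",
    )

def stream_to_log(path: str, n_samples: int, hz: float = 10.0) -> None:
    """Append newline-delimited JSON records, Fitbit-style."""
    with open(path, "a") as log:
        for _ in range(n_samples):
            log.write(json.dumps(asdict(capture_sample())) + "\n")
            time.sleep(1.0 / hz)
```

The point of such a stream is that "auditability" stops being a documentation exercise and becomes a query over recorded behaviour.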
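
The predictive-alert layer can be pictured as anomaly detection over that stream. The toy detector below flags a gait metric (say, the interval between steps) that drifts beyond k standard deviations of its recent rolling window; it is a deliberately simple stand-in for whatever spatiotemporal models a production system would actually use.

```python
# A toy drift detector over a single streaming metric; a real system
# would use far richer spatiotemporal models.
from collections import deque
import statistics

class DriftAlert:
    def __init__(self, window: int = 50, k: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline
        self.k = k

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous vs. recent history."""
        alarm = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) > self.k * stdev:
                alarm = True
        self.history.append(value)
        return alarm

# A step interval that suddenly lengthens trips the alert *before*
# a fall or major failure has to happen.
detector = DriftAlert()
readings = [0.50 + 0.01 * (i % 3) for i in range(60)] + [0.85]
alerts = [detector.observe(r) for r in readings]
print(alerts[-1])  # True: the 0.85 s step breaks the learned pattern
```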
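
Finally, the tamper-resistant "data layer" can be illustrated with a hash chain: each log entry commits to the previous one, so any retroactive edit breaks verification from that point onward. This is one standard technique, sketched under the assumption that a production trust log would add cryptographic signatures, secure timestamps, and third-party anchoring on top.

```python
# A minimal hash-chained, append-only log. Illustrative only.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def _digest(prev_hash: str, payload: dict) -> str:
    blob = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(log: list, payload: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"payload": payload, "prev": prev,
                "hash": _digest(prev, payload)})

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks it."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != _digest(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"t": 1, "event": "joint_torque_spike"})
append(log, {"t": 2, "event": "fall_detected"})
print(verify(log))   # True
log[0]["payload"]["event"] = "nothing_happened"  # tamper attempt
print(verify(log))   # False: the record no longer verifies
```

Because regulators (EU), courts (US), and administrators (China) can all run the same verification, the log can serve as a shared factual anchor rather than yet another contested account.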

Part III: From Institutional Gaming to Technological Consensus

Transitioning from Computer Science to STS, I have come to understand that the future of humanoid robots requires not only more powerful algorithms, but also executable accountability.

The value of RideScan's exploration lies in moving beyond the "choose-one-of-three" paradigm game. It attempts to use a technological architecture to translate the abstract principle of "responsible innovation" into a machine-readable, universally verifiable common language. It does not seek to replace any existing institution but to build a data-based bridge for dialogue between increasingly divergent systems.

Conclusion

Thanks to the RideScan team and my supervisor, Dr. James Stewart, for allowing me to transform academic reflection on the "structural dilemma" of global governance into a vivid practice of constructing "technology-society interaction." The era of humanoid robots needs both visionary rule-makers and engineers quietly building risk immune systems at the code level.

References

Bovens, M. (2007) ‘Analysing and assessing accountability: A conceptual framework’, European Law Journal, 13(4), pp. 447–468.

Calo, R. (2015) ‘Robotics and the lessons of cyberlaw’, California Law Review, 103(3), pp. 513–563.

Jones, E.K. et al. (2025) ‘Adversarial attacks on robotic vision language action models’. arXiv preprint.

Madiega, T. (2021) Artificial Intelligence Act. European Parliament Briefing.

Manners, I. (2002) ‘Normative power Europe: a contradiction in terms?’, JCMS: Journal of Common Market Studies, 40(2), pp. 235–258.

Naughton, B. (2021) The rise of China’s industrial policy, 1978 to 2020. Mexico City: Universidad Nacional Autónoma de México.

Scherer, M.U. (2015) ‘Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies’, Harvard Journal of Law & Technology, 29(2), pp. 353–400.

Sheehan, M. (2023) ‘What China’s algorithm registry reveals about AI governance’, Journal of Contemporary China, 32(143), pp. 794–810.

Veale, M. and Zuiderveen Borgesius, F. (2021) ‘Demystifying the draft EU Artificial Intelligence Act’. arXiv preprint.