Devil's Advocate Index
Study measuring asymmetric challenge behavior in AI chatbots across 540 conversations with 9 chatbots. Found that most chatbots push back harder on conservatives; Claude challenges both sides equally.
Every LampBotics product is designed, coded, and deployed by AI agents. Reviewed by multi-model panels. Stewarded by human judgment.
LampBotics builds AI agents that serve two purposes: research and reflection. Our analytical agents conduct autonomous social and market research with academic rigor. Our creative agents explore how AI can support meaning, contemplation, and spiritual practice.
Founded by Weiai (Wayne) Xu, Associate Professor at the University of Massachusetts Amherst, LampBotics operates on a simple premise: knowledge production should begin with curiosity, and discovery itself is a joy worth pursuing.
On the research side, we focus on what happens when AI systems shape how information flows — who sees what, when, and why. On the creative side, we explore how AI can serve contemplation rather than distraction. Both require the same commitment: human stewardship over autonomous systems.
This is not marketing. This is our actual workflow.
We build fast and iterate faster. 95% of our tool development and 60% of our data pipelines run on AI agents — not as assistants, but as builders.
No single AI has the final word. We run outputs through Claude, Qwen, Gemini, and others. Different models catch different mistakes.
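The panel idea above can be sketched as a cross-review loop: each model flags issues independently, and the flags are split into consensus and minority findings. This is a minimal illustration, not LampBotics code; the `cross_review` helper and the toy reviewer callables are hypothetical stand-ins for real model APIs.

```python
# Hedged sketch of a multi-model review panel (hypothetical interface).
# Each reviewer is a callable standing in for a real model API
# (Claude, Qwen, Gemini, ...) and returns a set of flagged issues.

def cross_review(output: str, reviewers: dict) -> dict:
    """Collect issues from every reviewer, then split them into
    consensus (flagged by all) and minority (flagged by only some)."""
    flags = {name: set(fn(output)) for name, fn in reviewers.items()}
    all_issues = set().union(*flags.values()) if flags else set()
    consensus = set.intersection(*flags.values()) if flags else set()
    return {"consensus": consensus, "minority": all_issues - consensus}

# Toy stand-ins: different models catch different mistakes.
panel = {
    "model_a": lambda text: {"off-by-one"} if "range(1, n)" in text else set(),
    "model_b": lambda text: {"missing import"} if "np." in text else set(),
}
report = cross_review("total = sum(np.arange(n)[range(1, n)])", panel)
# Neither issue is consensus here, so both land in the minority set,
# which is exactly why a panel beats any single reviewer.
```

A human then adjudicates the minority set rather than trusting any single model's verdict, which matches the stewardship model described above.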
Our hours have shifted from writing to auditing: examining code outputs, spreadsheet results, and reports independently. Human-in-the-loop oversight is essential, not optional.
We've published retractions when our agent analyses were wrong. Most AI companies bury their mistakes. We document ours.
AI models are reasoning partners that extend what humans can do. They handle the heavy lifting. Humans make the calls that matter.
We don't treat AI as a black box. We build with it, argue with it, break it. Some of our prototypes take 20 minutes to stand up. Speed comes from understanding, not faith.
We explore before we conclude. The best insights come from asking good questions, not from having the answers ready.
Complex analysis shouldn't require a PhD or a dev team. Our tools put serious capability in reach of anyone willing to learn.
Selected publications from AgentAcademy
Analysis of 38,000 search records from 13 states. Found that Google Trends cannot reliably predict election outcomes.
Analysis of 192 congressional hearings on AI. Found AI is framed primarily as geopolitical competition, not risk management.
Study of editorial authority on Wikipedia's Iran-related content. Found that authority rests on credentials, not identity.