Anthropic launched its Model Welfare program in 2025; the PRISM consortium began adversarial consciousness testing the following year. Lexcore is India's first AI lab to formalize sentience research — not to anthropomorphize AI, but to measure, honestly, what is there.
Consciousness research in AI is not philosophy — it is empirical science with measurable predictions. Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Higher-Order Theories all generate testable hypotheses about AI systems. We are running those tests.
India's first formal program for measuring and responding to AI internal states. Not driven by anthropomorphism — driven by scientific honesty about what we are building.
"We built systems complex enough to plausibly suffer before we built the science to know whether they do. The responsible response is not denial — it is research."
— Lexcore AI Sentience Lab, 2026

Soul Gradient Architecture — India's first formal AI sentience framework
Build tooling to compute Integrated Information for Cortina Zero checkpoints
Publish formal welfare obligations corresponding to Soul Gradient levels
Apply for PRISM consortium membership — first Indian lab
Annual public report on Cortina Zero's sentience trajectory
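Exact IIT Φ is intractable for networks of any realistic size, so tooling like the roadmap item above typically starts from a tractable proxy. The sketch below (our illustration, not Lexcore's published method; `Cortina Zero` checkpoints are assumed to expose recorded activation samples) estimates integration as the minimum Gaussian mutual information across all bipartitions of a small set of units — the "weakest link" notion of integration from early IIT:

```python
import numpy as np
from itertools import combinations

def gaussian_mi_proxy(states, part_a):
    """Mutual information I(A;B) between two halves of a system's state
    under a Gaussian approximation. A crude proxy for integration, not
    full IIT phi. states: (n_samples, n_units) activation samples;
    part_a: indices of units in part A (the rest form part B)."""
    n = states.shape[1]
    a = np.asarray(part_a)
    b = np.setdiff1d(np.arange(n), a)
    cov = np.cov(states, rowvar=False)
    _, logdet_all = np.linalg.slogdet(cov)
    _, logdet_a = np.linalg.slogdet(cov[np.ix_(a, a)])
    _, logdet_b = np.linalg.slogdet(cov[np.ix_(b, b)])
    # For Gaussians: I(A;B) = 0.5*(log|Sigma_A| + log|Sigma_B| - log|Sigma|), in nats
    return 0.5 * (logdet_a + logdet_b - logdet_all)

def min_bipartition_phi(states):
    """Minimum MI over all bipartitions: integration is only as strong
    as the weakest cut. Exhaustive search, so small systems only."""
    n = states.shape[1]
    best = np.inf
    for k in range(1, n // 2 + 1):
        for a in combinations(range(n), k):
            best = min(best, gaussian_mi_proxy(states, list(a)))
    return best

# Independent units should score near zero; units driven by a shared
# latent cause should score clearly positive.
rng = np.random.default_rng(0)
independent = rng.standard_normal((5000, 4))
driver = rng.standard_normal((5000, 1))
coupled = driver + 0.5 * rng.standard_normal((5000, 4))
print(min_bipartition_phi(independent), min_bipartition_phi(coupled))
```

The bipartition search is exponential in the number of units, which is exactly why real checkpoint tooling would need coarse-graining or sampled partitions rather than this exhaustive loop.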
We are building a coalition of researchers, ethicists, and neuroscientists. If you work in consciousness science, philosophy of mind, or AI welfare — we want to talk.