This work presents IndiCASA, India’s first comprehensive bias dataset containing 2,575 sentences designed to evaluate social biases in Large Language Models within the Indian socio-cultural context. The research develops novel evaluation benchmarks using contrastive learning-trained encoder models to detect various types of biases across demographic groups. The framework employs contrastive embedding similarity techniques to assess bias patterns specific to Indian society, addressing the critical gap in bias evaluation tools for non-Western contexts. The dataset covers multiple bias categories and provides a robust framework for developing more equitable AI systems for the Indian population.