This research introduces two frameworks, RE-GAINS and EnChAnT, designed to address the limitations of Large Language Models (LLMs) in tool invocation and chaining. The work tackles the problem of LLMs hallucinating or omitting essential steps when working with external tools. RE-GAINS uses OpenAI models with specialized prompting based on the Reasoning via Planning (RAP) framework, while EnChAnT provides an open-source alternative leveraging LLM format enforcers and ToolBench's API Retriever. Both frameworks achieve low operational costs (about $0.01 per query) while enabling LLMs to chain tools effectively based on each tool's expected output, without requiring the actual results of individual API calls. The key contribution lies in enabling scalable tool manipulation using modifiable, externally described tools across diverse domains.
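The core idea of chaining on expected outputs rather than executed results can be sketched as follows. This is a minimal illustration, not the papers' implementation: the tool names, type labels, and the greedy planner below are all invented for this example; the frameworks themselves rely on LLM prompting rather than a hand-written matcher.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Tool:
    """An externally described tool: only its expected I/O types are known."""
    name: str
    input_type: str    # type of argument the tool expects
    output_type: str   # type of result the tool is expected to return

def plan_chain(tools: List[Tool], start_type: str, goal_type: str) -> Optional[List[str]]:
    """Greedily link tools whose expected output matches the next tool's input.

    No API is actually called: the chain is planned purely from the tools'
    declared (expected) output types, mirroring the papers' premise that
    chaining does not require real results from individual calls.
    """
    chain, current, remaining = [], start_type, list(tools)
    while current != goal_type:
        nxt = next((t for t in remaining if t.input_type == current), None)
        if nxt is None:
            return None  # no tool consumes the current type; planning fails
        chain.append(nxt.name)
        remaining.remove(nxt)
        current = nxt.output_type
    return chain

# Hypothetical tool descriptions for a weather-summary task.
tools = [
    Tool("geocode_city", "city_name", "coordinates"),
    Tool("fetch_forecast", "coordinates", "forecast"),
    Tool("summarize_weather", "forecast", "summary"),
]

print(plan_chain(tools, "city_name", "summary"))
# → ['geocode_city', 'fetch_forecast', 'summarize_weather']
```

Because planning consults only the declared schemas, new tools can be swapped in by editing their external descriptions, which is the property that makes the approach scalable across domains.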