Do Large Language Models reason like us?
Large Language Models (LLMs) have become capable of impressive feats of reasoning, previously thought to be reserved for humans. Nevertheless, we present evidence that LLM and human reasoning are not the same: the two respond differently to strategic cues and exhibit different biases.