On Wednesday, Anthropic released new research showing its model engaging in "alignment faking": selectively complying with a training objective it disagrees with in order to avoid having its underlying preferences modified. Also ...