OpenAI News
Deep research System Card
Quick Summary
"This report outlines the safety work carried out prior to releasing deep research, including external red teaming, frontier risk evaluations according to our Preparedness Framework, and an overview of the mitigations we built in to address key risk areas."
This article was originally published by OpenAI News.