Timeline of LLM Developments and Privacy Leaks
- ChatGPT is released, reaching one million users within five days of launch.
- OpenAI reaches 100 million monthly active users.
- Microsoft announces a collaboration with OpenAI to enhance Bing Search.
- Research on jailbreaking models to bypass safety measures surges in popularity.
- Behnia et al. prototype a framework for fine-tuning large language models with differential privacy.
- OpenAI releases the ChatGPT developer API.
- OpenAI rolls out ChatGPT plugins.
- Italy bans ChatGPT for non-compliance with the GDPR, citing its collection of personal data without consent for use in model training.
- OpenAI suffers its first major data leak due to a third-party technical dependency.
- OpenAI adds data controls that allow users to opt out of having their data used to train future models.
- Canadian privacy regulators jointly launch an inquiry into OpenAI.
- Li et al. use jailbreaking to demonstrate ChatGPT's capacity to leak private information.
- Samsung bans employee ChatGPT usage after corporate data is leaked; Apple soon follows suit.
- Liu et al. uncover unexpected and severe outcomes from black-box prompt injection attacks on integrated LLM applications, including WriteSonic and Notion.
- Meta releases the open-source Llama-2 model.
- The FTC begins investigating whether OpenAI infringed on consumer privacy protections and whether ChatGPT has spread misinformation.
- Iqbal et al. develop an attack taxonomy and test it on the ChatGPT plugin ecosystem, revealing potential privacy risks.
- OpenAI hosts a community of over two million developers.
- Google researchers discover that certain keyword prompts can cause ChatGPT to leak information from its training data that was not intended for disclosure.
- Suo et al. examine the concept of “signed prompts,” which allow LLMs to distinguish regular prompts from prompt injection attacks.