Listing of events related to the Learning Observer
View the Project on GitHub ETS-Next-Gen/learning-observer-events
Talk through risk analysis for the types of data being collected, shared, stored, and analyzed. What are the risks? How do we mitigate them? For the rest of this session, take a privacy-focused perspective.
A few lenses which might help:
History predicts the future.
In 1990, the Internet was a pretty safe place. Usenet posts were viewed as ephemeral, since there was no way to find past content. Email addresses were openly shared, since spam was not yet common. Of course, all those posts are now indexed, and all those addresses are now spammed.
Think through the data collected and stored now, and how risk profiles might change over time. Think through how data collected under regimes like Nazi Germany and the Soviet Union was used. Think through how data collected about kids today, if archived, might be used in a few decades.
We have consistently seen released datasets re-identified, sometimes much later, including the AOL search data, the Netflix Prize dataset, and Massachusetts medical records.
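The Massachusetts case illustrates the basic mechanism: records stripped of names still carry quasi-identifiers (ZIP code, birth date, sex) that can be joined against a public record such as a voter roll. A minimal sketch, using entirely invented data and a hypothetical `reidentify` helper:

```python
# Sketch of a linkage (re-identification) attack. The records and
# names below are invented for illustration only.

anonymized_records = [
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1962-01-15", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02140", "birth": "1980-03-02", "sex": "M"},
]

def reidentify(anon, public):
    """Join two datasets on shared quasi-identifiers (zip, birth, sex)."""
    matches = []
    for a in anon:
        for p in public:
            if all(a[k] == p[k] for k in ("zip", "birth", "sex")):
                matches.append((p["name"], a["diagnosis"]))
    return matches

print(reidentify(anonymized_records, public_voter_roll))
# → [('Jane Doe', 'hypertension')]
```

Note that neither dataset is sensitive on its own; the risk comes from the join, which is why "we removed the names" is not a sufficient protection.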
Think about the recent UN Rohingya data leak. Think about uses (misuses?) of data in China. Think about students targeted by, and hiding from, militant groups.
What kinds of data protections are needed to keep these students safe?
If an ed-tech system results in a single death, that’s a bad outcome. Think about high-risk individuals, such as:
All of these show up in the data. Typing patterns, usage timestamps, vocabulary frequency, and grammatical structure form a unique fingerprint which we are increasingly good at identifying.
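Even crude features can act as such a fingerprint. A minimal sketch, using invented text samples and simple word-frequency vectors compared by cosine similarity (real stylometric attacks use far richer features, so treat this as illustrative only):

```python
# Sketch of stylometric fingerprinting: attribute an "anonymous"
# sample to the most similar known writing profile. All samples
# and author labels are invented for illustration.
import math
from collections import Counter

def profile(text):
    """Word-frequency vector for a text sample."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known = {
    "student_a": profile("i basically think the homework was basically fine"),
    "student_b": profile("the assignment, in my considered view, was adequate"),
}
anonymous = profile("this exam was basically fine i basically think")

# The closest known profile is a (weak) guess at authorship.
guess = max(known, key=lambda who: cosine(known[who], anonymous))
print(guess)
```

The point is not that this toy classifier is reliable; it is that writing style leaks identity even after names are removed, and production-grade models are much better at it.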
On your own, think about the worst-case here.