I haven’t seen the current curriculum, but this kind of thing was an area of research for me (the spread of information on social networks).
There was a study done - I want to say about 40 years ago - that used a single lesson to teach young kids the basics of literary criticism and deconstruction so that they could dissect what Saturday-morning cartoon ads were trying to say. The kids were able to identify that the ads implied eating a sugary breakfast cereal would get you more fun friends to play with, and so on. A lot of it had to do with social pressure.
In any case, there was a measurable increase in the kids’ ability to resist being influenced by the ads once they knew what to look for. I suspect they’ll take a similar approach here.
Nothing is ever going to be 100% successful, but if you pull back the curtain and show them that the Grand Wizard is just a little man pulling levers, it’ll hopefully help enough people to matter.
Here’s the basis of the finding:
The bot that parses the articles creates a worse summary than you’d get by just reading random sentences.
In any case, we should note that this finding came after recent media disclosures that Musk and Tesla deliberately created a false impression of the reliability of their Autopilot capabilities. They were also deceptive about the capabilities of vehicles like the Cybertruck and their semi, as well as things like range estimation, which suggests a pattern of deliberate deception - a Tesla company practice spanning product lines. The click-through defense, set against what the CEO says on stage at massively publicized announcements, sounds to me a bit like Trump’s defense that he signed his financial statements but that signing them didn’t actually confirm anything, so the people who believed him are the ones to blame.
Given his groundless lawsuit against Media Matters and his threats against the ADL, I think Elon might have started circling the drain.