Sounds like a great idea. Plain English (or any human language) is not the best way to store information. I've certainly noticed mismatches between the data in different languages, or across related articles, because they don't share the same data source.
Take a look at the article for NYC in English and French and you'll see a bunch of data points, like total area, that are different. Not huge differences, but any difference at all is enough to demonstrate the problem. There should be one canonical source of data shared by all representations.
Wikipedia is available in hundreds of languages. Why should hundreds of editors need to update the NYC page every time a new census comes out with new population numbers? Ideally, that would require only one change to update every version of the article.
In programming, the convention is to separate the data from the presentation. In this context, plain English is the presentation, and weaving the actual data into it is sub-optimal. Something like the population or area of a city is not language-dependent and should not be stored in a language-dependent way.
Ultimately, this is about reducing duplicate effort and maintaining data integrity.
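To make the separation concrete, here's a rough sketch in Python (the numbers and template strings are made up for illustration, and locale-aware number formatting is left out): the figures live in one canonical record, and each language only contributes a presentation template.

```python
# Rough sketch of separating data from presentation.
# The figures are placeholders, not authoritative values.

nyc = {
    "population": 8_804_190,   # one canonical, language-independent value
    "area_km2": 1_223.59,
}

# Each language edition only contributes a presentation template.
# (Locale-aware number formatting is omitted to keep the sketch short.)
templates = {
    "en": "New York City has a population of {population:,} and covers {area_km2} km².",
    "fr": "New York compte {population:,} habitants et couvre {area_km2} km².",
}

for lang, template in templates.items():
    print(lang, "->", template.format(**nyc))
```

Update the one value in `nyc` and every presentation picks it up automatically; that's the whole point.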
This problem was essentially solved back in 2012 with the introduction of Wikidata, but not all language editions have decided to use it.
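For anyone curious what that looks like in practice: New York City is item Q60 on Wikidata and population is property P1082, so any edition (or any script) can pull the same canonical value from the public SPARQL endpoint. A quick sketch in Python using `requests` (error handling and statement-rank handling omitted):

```python
import requests

# Ask Wikidata's public SPARQL endpoint for the population (P1082)
# of New York City (item Q60).
query = """
SELECT ?population WHERE {
  wd:Q60 wdt:P1082 ?population .
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-population-example/0.1"},
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print("population:", row["population"]["value"])
```

Every language edition pulling from that one record would, by construction, agree with every other.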
How common is it in English? I haven't checked a lot of articles, but I did check the source of the English and French NYC articles I linked and it seems like all the information is hardcoded, not referenced from Wikidata.
I think enwiki tends to use Wikidata relatively sparingly.
Some people like the little bit of power, which they call "meritocracy", over deciding what belongs in the article and what doesn't.
Disclaimer: I haven't done any research on this, but what would be wrong with just having an AI translate the text, given a reliable enough AI? No code required, just plain human speech.
This will help make machine translation more reliable, ensuring that objective data does not get transformed along with the language presenting that data. It will also make it easier to test and validate the machine translators.
Any automated translations would still need to be reviewed. I don't think we will (or should) see totally automated translations in the near future, but I do think machine translators could be a very useful tool for editors.
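As a rough illustration of keeping the data out of the translator's hands: translate the sentence with placeholders left in, then substitute the canonical values afterwards. Here's a Python sketch where `translate()` is just a stand-in for whatever machine translator is used (the canned French string is only there to make the sketch runnable):

```python
def translate(text: str, target_lang: str) -> str:
    # Stand-in for a real machine translation call; returns a canned
    # translation so the sketch runs on its own.
    canned = {
        ("The population of {city} is {population}.", "fr"):
            "La population de {city} est de {population}.",
    }
    return canned[(text, target_lang)]

# Only the template is translated; the data never passes through the translator.
template_en = "The population of {city} is {population}."
template_fr = translate(template_en, "fr")

data = {"city": "New York City", "population": 8_804_190}  # canonical values

print(template_fr.format(**data))
```

The number can't be mangled or silently "corrected" in translation, because the translator never sees it.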
Language models are impressive, but they are not efficient data retrieval systems. Denny Vrandečić, the founder of Wikidata, has a couple of insightful videos on this topic.
This one talks about knowledge graphs in general, from 2020: https://www.youtube.com/watch?v=Oips1aW738Q
This one is from last year and is specifically about how you could integrate LLMs with the knowledge graph to greatly increase their accuracy, utility, and efficiency: https://www.youtube.com/watch?v=WqYBx2gB6vA
I highly recommend that second video. He does a great job laying out what LLMs are efficient for, what more conventional methods are efficient for, and how you can integrate them to get the best of both worlds.
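The rough pattern he describes, as I understand it: look the fact up in the knowledge graph and hand it to the language model, so the model only has to phrase the answer rather than remember the number. A hypothetical Python sketch (`ask_llm()` is a stand-in for whatever model you'd plug in; the Wikidata query just picks one population statement, ignoring ranks and qualifiers):

```python
import requests

def fetch_population(qid: str) -> str:
    """Look the figure up in Wikidata (property P1082) instead of trusting the model's memory."""
    query = f"SELECT ?pop WHERE {{ wd:{qid} wdt:P1082 ?pop . }} LIMIT 1"
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "kg-llm-example/0.1"},
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"][0]["pop"]["value"]

def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; the interesting part is what goes into the prompt.
    raise NotImplementedError("plug in your model of choice here")

population = fetch_population("Q60")  # Q60 = New York City
prompt = (
    "Write one sentence stating the population of New York City, "
    f"using exactly this figure from Wikidata: {population}."
)
# answer = ask_llm(prompt)
```

The model never gets to guess the figure; it only dresses it up in natural language.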
Thanks! I'll come back to this thread once I read more.