yeah, there was a feature that was supposed to do that, but it was never implemented properly, which made it literally useless, and that was only discovered now, 3 years later
dannym
can you imagine the audacity of a company to not only collect your data and sell it, but also charge you for that?
openboard lacks a japanese IME, so it's useless to me
now they just need to make a keyboard and integrate a JP IME into it... there are so few good keyboards that tick all the boxes on Android...
as far as I know the only two keyboards that have decent English prediction and a JP IME are Gboard and Anthy, and both options suck for different reasons
yeah seriously, I looked at rent prices in Chicago, and what you can get for 1000 dollars in Tokyo, in a decent area not too far from the city center, would cost you 3000 in Chicago in most places. And if you go to Kawasaki or somewhere like that, make it 500.
Be warned though, one thing that sucks about renting in Japan is the initial costs: you're basically expected to pay 6 to 9 months' worth of rent upfront ("key money" + "agency fee" + "guarantor fee" + deposit), and if you move out you only get the deposit back (usually 1 to 2 months), which is bullshit.
yeah, to be clear: capsule hotels in Japan are not meant for long-term stays. They're for busy business people who need a quick place to sleep for ONE night because they worked late and missed the last train, or similar situations. Nobody actually lives in a capsule hotel
EDIT: to clarify, some people may live in a capsule hotel, but they're not designed for long-term living
they can't understand Chinese; they just receive a bunch of symbols and have a book with a bunch of instructions on how to answer based on the input (I can't speak Chinese, so I will just go with Japanese for my example)
imagine the following rule set:
- If the sentence starts with the characters "元気", the algorithm should commence its response with "はい", "うん" or "多分" and then repeat the two characters, "元気".
- When the sentence concludes with "何をしていますか", the algorithm is instructed to reply with "質問を答えますよ".
- If the sentence is precisely "日本語わかりますか?", the algorithm has the option to respond with either "え?もちろん!" or "いや、実は大和語だけで話す".
input: 元気ですか?今何をしていますか?
output: うん, 元気. 質問を答えますよ :P
input: 日本語わかりますか?
output: え?もちろん!
With an exhaustive set of, say, 7 billion rules, the algorithm can mechanically map an input to an output, but this does not mean that it can speak Japanese.
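to make the "mechanically map an input to an output" bit concrete, here's a rough Python sketch of that kind of rule lookup. The RULES table and the respond function are just made up for illustration (they only encode the three toy rules above), not real code from anywhere:
```python
import random

# Illustrative "rule book": each entry is (condition on the input, possible replies).
# Only the three toy rules from the example above, nothing more.
RULES = [
    (lambda s: s.startswith("元気"),
     ["はい、元気", "うん、元気", "多分、元気"]),
    (lambda s: s.rstrip("?？").endswith("何をしていますか"),
     ["質問を答えますよ"]),
    (lambda s: s.rstrip("?？") == "日本語わかりますか",
     ["え？もちろん！", "いや、実は大和語だけで話す"]),
]

def respond(sentence: str) -> str:
    """Mechanically map an input to an output by pattern-matching against
    the rule book. No understanding involved, only lookup."""
    parts = [random.choice(replies) for matches, replies in RULES if matches(sentence)]
    return " ".join(parts) if parts else "..."  # no rule matched

print(respond("元気ですか？今何をしていますか？"))  # e.g. "うん、元気 質問を答えますよ"
print(respond("日本語わかりますか？"))              # e.g. "え？もちろん！"
```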
Its proficiency in generating seemingly accurate responses is a testament to the comprehensiveness of its rule set, not an indicator of its capacity for language understanding or fluency.
While John McCarthy and other sources offer valuable definitions, none of them fully encompasses the qualities that make an entity not just "clever" but genuinely intelligent in the way humans are: the capacity for abstract thinking, problem-solving, emotional understanding, and self-awareness.
If we accept the idea that any computer performing a task indistinguishable from a human is "intelligent," then we'd also have to concede that simple calculators are intelligent because they perform arithmetic as accurately as a human mathematician. This reduces the concept of intelligence to mere task performance, diluting its complexity and richness.
By the same logic, a wind-up toy that mimics animal movement would be "intelligent" because it performs a task (walking) that in another context, i.e. in a living creature, is considered a sign of basic intelligence. Clearly, this broad classification would lead to absurd results.
I think we're splitting hairs here. Look, you're technically correct, but none of what you said disproves my point, does it? Perhaps I should edit my comment to make it even clearer that it's not EXACTLY the same technology, but I don't think you'd argue with me that it's an evolution of it, right?
I can disprove what you're saying with four words: "The Chinese Room Experiment".
Imagine a room where someone who doesn't understand Chinese receives questions in Chinese and consults a rule book to send back answers in Chinese. To an outside observer, it looks like the room understands Chinese, but it doesn't; it's just following rules.
Similarly, advanced language models can answer complex questions or write code, but that doesn't mean they truly understand or possess rationality. They're essentially high-level "rule-followers," lacking the conscious awareness that humans have. So, even if these models perform tasks and can fool humans into believing they're intelligent, that's not a valid indicator of genuine intelligence.
Maybe you're right, but to me it's still worth pointing out those issues