LLM- and ML-based translations generate a series of tokens one at a time. That’s why AI chatbots hallucinate so often: the model decides the next most likely word in a sequence is “No” when the correct answer would be “Yes”, and then the rest of the response devolves into convincing nonsense. Machines are incapable of any sort of critical thinking to discern correct from incorrect, or to decide whether a contextual response even fits.
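To make that concrete, here’s a minimal sketch of greedy token-by-token generation in Python. The “model” is a hand-written toy bigram table standing in for a real LLM (real models condition on the whole context, not just the previous word, but the failure mode is the same): one near-coin-flip pick early on, and everything generated afterward is conditioned on the mistake.

    # Toy sketch of greedy, token-by-token generation. The lookup table
    # below is a stand-in for a real language model; nothing here is any
    # vendor's actual code.
    TOY_MODEL = {
        # previous word -> {candidate next word: probability}
        "Is":    {"it": 1.0},
        "it":    {"safe?": 1.0},
        "safe?": {"No": 0.51, "Yes": 0.49},  # a near-coin-flip token
        "No":    {"do": 1.0},
        "Yes":   {"go": 1.0},
        "do":    {"not": 1.0},
        "not":   {"proceed.": 1.0},
        "go":    {"ahead.": 1.0},
    }

    def generate(prompt: str, max_tokens: int = 5) -> str:
        tokens = prompt.split()
        for _ in range(max_tokens):
            probs = TOY_MODEL.get(tokens[-1])
            if not probs:
                break
            # Greedy decoding: always take the single most likely next
            # token. Every later token is conditioned on this choice, so
            # one wrong pick ("No" at 51% vs "Yes" at 49%) flips the
            # entire answer that follows.
            tokens.append(max(probs, key=probs.get))
        return " ".join(tokens)

    print(generate("Is it safe?"))  # -> Is it safe? No do not proceed.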
Those are not examples, just what you claim will happen based on what you think you understand about how LLMs work.
Show me examples of what you mean. Just run some translations through their AI translator or something and show me how often it produces inaccurate translations. It doesn’t seem that hard to prove what you claimed.
You want examples, but you never disclosed which product you’re asking about, and why should I give a damn in the first place? I shouldn’t have to demonstrate an absence of evidence that it works in order to prove it doesn’t work.
Yeah, Waterfox is just another browser built on top of Mozilla’s Gecko engine, but without all the AI dickriding.
How terrible to offer client-side translation or webpage description for differently abled people!
Client-side incorrect translations*
How incorrect is it?
Sentences are a lot like math problems: an incorrect part changes the entire outcome. Flip a single negation in “do not take this medication” and the translation means the exact opposite.
Yes, so show me how incorrect their translation is, since you claim it to be incorrect.