> We used novel synthetic data generation techniques, such as distilling outputs from OpenAI o1-preview, to post-train the model for its core behaviors. This approach allowed us to rapidly address writing quality and new user interactions, all without relying on human-generated data.
So they took a bunch of human-generated data and put it into o1, then used the output of o1 to train canvas? How can they claim that this is a completely synthetic dataset? Humans were still involved in providing data.
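To make the objection concrete, the pipeline being described is ordinary knowledge distillation: human-written prompts go in, a teacher model's outputs become the training labels. A minimal sketch, with hypothetical stand-in functions (this is not OpenAI's actual pipeline; `teacher_generate` is a placeholder for a real model call):

```python
def teacher_generate(prompt: str) -> str:
    # Stand-in for querying the teacher model (e.g. o1-preview).
    return f"[teacher answer to: {prompt}]"

def build_distillation_set(prompts):
    # Note: the prompts themselves are still human-authored;
    # only the *labels* (the completions) are machine-generated.
    return [(p, teacher_generate(p)) for p in prompts]

human_prompts = ["Rewrite this paragraph", "Fix this code"]
dataset = build_distillation_set(human_prompts)
print(len(dataset))  # one (prompt, teacher_output) pair per prompt
```

Which is exactly the point of the comment: the "synthetic" dataset is downstream of human-generated prompts and of the human data the teacher was trained on.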
It may be related to the famous 301 view count, where YouTube would stop updating the total pending further verification that the views were legitimate. See: https://youtu.be/oIkhgagvrjI That behavior was removed in 2015, but I wouldn’t be surprised if something similar is happening here.
This bot answers any prompt, even those completely unrelated to the product (e.g., generating code, writing paragraphs). I imagine this could be abused and rack up unwanted costs. Also, how do you ensure the bot isn’t prone to hallucinations? I would never risk giving a customer wrong information about my product.
How many more platforms do we need where users can post all sorts of media and follow others’ content? Every social media site copies features from another, and they are all basically the same product wrapped in a different package. When will people wake up and return to RSS?
RSS was built by techies and only ever used by techies. Uncontrolled news feeds with no data collection don't make enough money for the giants.
Also, RSS only thrives if sites put in the effort to support it. Even so, the majority of online users are on social media platforms, not "surfing the www" like in the good ol' days.
Feels like they were too eager to scoop up Twitter refugees. The platform is still missing basic features and new users turn away immediately. Nobody’s going to join a platform and then keep their account open for 6 months while the platform plays catch-up. The launch would have been much more successful if they waited another month. I understand this is a peak time to capture former Twitter users, but in the long run it may have been a mistake. Will be interesting to see how they market it as things develop.
Well I for one am actually looking forward to relevant ads.
On Twitter I've been shown everything from industrial mining supplies, nipple covers, psychology research papers, super yachts, home shopping network junk and just now an ad for an oral dosing technology conference.
Twitter is positively inundating me with "ads" from people boosting their Twitter profiles, all dedicated to crypto, health "hacks", finance gurus, yoga teachers, etc. I feel like Apple ads were the ones I saw most, and now I haven't seen an Apple ad in over a week. It really feels like advertisers are all pulling out.
I’m finding Apple News actually provides me ads I click, and Reddit did briefly too. Neither of those apps ask nearly what Threads is asking. They are more tailored towards the content being shown though, and I turn down permissions whenever I can.
Anecdotally I’ve never purposefully clicked an ad on Twitter, I think either the buyers or the algorithms are off there.
That's always happened to me; I think if you follow any doctors, it shows you ads for medical conferences, but I can't tell if that's Twitter messing up or the people placing the ads setting the display audiences wrong.
Did Twitter always show you the same drop-shipper ad multiple times on the same thread? I'd be mad if I were an advertiser on Twitter, some of those ad impressions feel fraudulent to me, as a user.
I appreciate that Apple highlights each app’s privacy practices in an easy-to-read card, so developers don’t get to hide them in legalese a click away in a privacy policy.
The next step would be to actually prompt users about this, the same way you get a prompt asking whether you really want to download a large app over mobile data: “It looks like you are trying to install the app Threads, which reads the following information about you. Are you sure you would like to proceed?”
This would be a natural progression of the “Ask App Not to Track” dialog that they implemented a while ago.
Or simply add a colored indicator next to the download button: if an app collects too much info, it shows a glowing red exclamation mark; if it collects nothing, a green smiley face.
This is stupid. Who on earth would want to pay a subscription for a PC when they still need to buy one to stream the OS?
I see the value proposition, but it’s only as fast (and expensive) as your internet speed.
- Businesses and consumers could buy less expensive hardware, and simply pay $x/month for access.
- IT could remotely troubleshoot
- Fewer hardware issues
- A $10/month subscription would last longer than a $1200 laptop for most consumers.
- MS handles all software updates, stores all your personal data, has complete control of hardware and software.
- Some sort of device is still required. How does it respond to input from a keyboard, mouse, or controller? How do USB devices, Raspberry Pis, and external storage drives work?
- You still need a screen. I could see this being an app on smart TVs and mobile devices. (Since the compute is streamed, the local device can have a much smaller form factor.)
- Internet is still relatively slow; every input would feel laggy and unresponsive.
- Anyone who guesses your username/password gains access to your entire PC.
- Hackers now have a single target.
- If Microsoft’s servers ever go down, you can’t use your PC at all.
- You are subject to any future government regulations with no choice to opt out.
- You lose access to everything if your subscription expires.
- MS has an unprecedented level of access to data that will inevitably be used for advertising.
- MS becomes your ISP and can filter traffic however they want.
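The cost and latency claims in the list can be sanity-checked with quick arithmetic. The dollar figures come from the list itself; the latency numbers below are my own rough assumptions, not measurements:

```python
# Break-even: how long does the $10/month subscription take to cost
# as much as the $1200 laptop it replaces?
laptop_cost = 1200          # one-time, USD (figure from the list)
subscription = 10           # USD per month (figure from the list)
break_even_months = laptop_cost / subscription
print(break_even_months)    # 120 months, i.e. 10 years

# Input lag: streaming adds at least one network round trip plus video
# encode/decode on top of local latency before a keypress shows on screen.
rtt_ms = 30                 # assumed round trip to the datacenter
encode_decode_ms = 10       # assumed encode + decode overhead
added_latency_ms = rtt_ms + encode_decode_ms
print(added_latency_ms)     # ~40 ms extra on every interaction
```

So the pricing can genuinely favor the subscriber for a long time, while the responsiveness concern is structural: no connection quality removes the round trip entirely.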