"This week in the ➡️ #Metaverse" #34
Your weekly roundup of what’s happening in the digital land(scape). Metaverse ethics, governance and interesting trends from crypto, AI, gaming and regulation.
👁️🗨️What's going on:
Microsoft Teams is adding 3D avatars for people who want to turn their webcams off
Decentralized Social Media Rises as Twitter Melts Down
“Mastodon is hardly the Fediverse’s only offering. There’s also PeerTube, a decentralized video-sharing system; PixelFed, a similarly open-servered image-sharing service; Friendica, a more Facebook-like social media system; and Funkwhale, a community-driven music platform.”
Public notice board: It's #GDC week!! If you are attending the Game Developers Conference, join us for two panels: “Building Better Digital Ethics and Laws: Meatspace meets Metaverse”, and “The Current and Future Ethical Challenges in Games: XR and the Metaverse”
🤖I Am Not A Robot: AI news
In one of the biggest pieces of news for generative AI, OpenAI just released GPT-4
And in its journey from non-profit to very profitable licensing deals with Microsoft, OpenAI has transitioned from open to opaque as well.
Citing concerns over safety (while at the same time shielding the core of its business behind secrecy), the GPT-4 Technical Report states:
2 Scope and Limitations of this Technical Report
This report focuses on the capabilities, limitations, and safety properties of GPT-4. GPT-4 is a Transformer-style model [33] pre-trained to predict the next token in a document, using both publicly available data (such as internet data) and data licensed from third-party providers. The model was then fine-tuned using Reinforcement Learning from Human Feedback (RLHF) [34].
Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
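For readers newer to the jargon, “pre-trained to predict the next token” is the standard language-modeling objective. Here is a minimal sketch of that loss, purely a textbook illustration with made-up shapes, not OpenAI's code:

```python
# Minimal sketch of next-token prediction: score the model's output at each
# position against the token that actually comes next, using cross-entropy.
# Shapes and the vocabulary size are illustrative, not GPT-4's.
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 50_000, 8, 2
logits = torch.randn(batch, seq_len, vocab_size)         # model outputs per position
tokens = torch.randint(0, vocab_size, (batch, seq_len))  # token ids of the document

# Shift by one: the prediction at position t is graded on the token at t+1.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())
```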
Basically, withholding these details nukes public scrutiny over how the training data was gathered and what its sources were, which is highly problematic in terms of AI ethics, considering the need for accountability, traceability, interpretability, and transparency to fight biased outcomes.
If you are interested in reading more about an ethics framework for AI and its different dimensions of impact in society, you can head here.
On the trust and safety front, it is also not reassuring to see the results coming from early GPT-4 compared with the model at launch.
From what we have seen in previous examples with ChatGPT and Bing AI, these mitigations always end up having a workaround that allows the results of the initial training to resurface.
Also, the call is coming from inside the house, with Ilya Sutskever, OpenAI’s chief scientist and co-founder, stating: “I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
Too close for comfort, Meta’s LLaMA was leaked. As reported by Wired, “On March 3rd, a downloadable torrent of the system was posted on 4chan and has since spread across various AI communities, sparking debate about the proper way to share cutting-edge research in a time of rapid technological change.”
On a more optimistic note, Midjourney has released its long-awaited v5, with improved capabilities for rendering hands. Midjourney also launched a monthly physical magazine showcasing the art of the community:
Stanford’s Alpaca shows that OpenAI may have a problem: researchers fine-tuned a language model from Meta on text generated by OpenAI’s GPT-3.5 for less than $600 – and achieved similar performance.
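To make the trick concrete, here is a minimal sketch of that distillation loop, assuming the (pre-1.0) openai Python client; the seed instructions and file name are purely illustrative, not Stanford's actual pipeline:

```python
# Sketch of Alpaca-style distillation: collect outputs from a stronger model
# (GPT-3.5) and use them as supervised fine-tuning data for a smaller open
# model (LLaMA). Assumes the pre-1.0 openai client; prompts are illustrative.
import json
import openai

seed_instructions = [
    "Explain what a decentralized social network is.",
    "Write a haiku about video games.",
]

training_examples = []
for instruction in seed_instructions:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
    )
    output = response["choices"][0]["message"]["content"]
    # Each (instruction, output) pair becomes one fine-tuning example.
    training_examples.append({"instruction": instruction, "output": output})

# Saved in the kind of JSON a fine-tuning script for the base model would consume.
with open("distilled_instructions.json", "w") as f:
    json.dump(training_examples, f, indent=2)
```

The cheap part is exactly this step: the expensive human-written training data is replaced by a few dollars of API calls.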
Also, researchers at the University of Chicago developed Glaze, a tool that applies a “style cloak”: specific perturbations added to the original image. The noise is not easily detectable by the human eye, but will be picked up by the feature-extraction algorithms of generative AI.
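For the curious, here is a heavily simplified sketch of the cloaking idea. Glaze actually optimizes the perturbation against a feature extractor to shift the image's apparent style; this toy version only illustrates the small, bounded, barely visible noise part:

```python
# Toy illustration of image cloaking: add a small, bounded perturbation that
# is hard to see but changes the pixel data a model would train on.
# NOT Glaze's algorithm: Glaze optimizes the noise against a style-feature
# extractor, while this sketch uses random noise just to show the mechanics.
import numpy as np
from PIL import Image

def cloak(image_path: str, epsilon: float = 4.0) -> Image.Image:
    pixels = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    # Bounded noise in [-epsilon, epsilon] on the 0-255 scale: small enough
    # to be hard to see, but it alters every pixel value.
    perturbation = np.random.uniform(-epsilon, epsilon, pixels.shape)
    cloaked = np.clip(pixels + perturbation, 0, 255).astype(np.uint8)
    return Image.fromarray(cloaked)

cloak("artwork.png").save("artwork_cloaked.png")  # illustrative file names
```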
Cloaking unleashes even more complexity on copyright, as it might constitute a way to incorporate a DRM layer.
17 U.S. Code § 1201 makes it unlawful to circumvent technological measures used to prevent unauthorized access to copyrighted works, and that criminalization could be generalized to other types of AI training, which might have a huge impact beyond AI art, proving problematic for known AI ethics issues around transparency and traceability (the black box problem).
🤦The WTF award of the week goes to…
A couple got married in Decentraland, in a wedding sponsored by Taco Bell.
“They had to set up a simultaneous livestream of themselves on YouTube in order to meet a legal requirement for their real faces to be visible. That’s because some jurisdictions—including Utah, where their officiant was based—recognize remote weddings as legally binding only if the participants are viewable on video”
🎮Gaming news
Story Machine demonstrates how AI is helping game development
Microsoft offers remedies to EU to win approval for ABK acquisition
The European Union's antitrust regulator will make a final decision on the $68.7 billion deal by May 22.
💸CryptoLand:
Meta is killing NFT support on Facebook and Instagram. It comes almost exactly a year after Zuckerberg announced the push into “digital collectibles.”
😬Meme corner:
✍️Ending nugget:
“When we published Stochastic Parrots 🦜 (subtitle Can Language Models Be Too Big?) People asked how big is too big? Our answer: too big to document is too big to deploy.”
Emily M. Bender, professor in the Department of Linguistics, U. Washington
That’s all for now! Thanks for reading this new edition. If you want to continue the conversation, find me at https://twitter.com/whoisgallifrey. And if you want to support my work, please do so by sharing this newsletter!!