Leufer argues that rules mandating watermarks on all AI-generated content are unenforceable. Worse, watermarks could end up having the opposite of their intended effect. In open-source systems, malicious actors can easily strip out watermarking and provenance techniques, since the model's source code is accessible to everyone; any user can simply remove whatever safeguards they do not wish to include.
Leufer suggests that if only the largest companies or most popular proprietary platforms watermark their AI-generated content, the absence of a watermark could come to signal that content is not AI-generated.
He further explains that enforcing watermarking only where enforcement is feasible could inadvertently lend credibility to harmful material originating from systems that are difficult to regulate. Leufer believes that addressing the problem of deepfakes could prompt a push toward regulating platforms and promoting public understanding and transparency.
Deeper Learning
Witness a robot learning to perform wound stitches
An AI-trained surgical robot that can make stitches on its own is a small step toward systems that can assist surgeons with repetitive tasks. In a video recorded by researchers at the University of California, Berkeley, the robot completes six stitches in a row on a simple wound in imitation skin, passing the needle through the tissue and from one robotic arm to the other while maintaining tension on the thread.
Although robots already assist surgeons in a range of procedures, this research marks progress toward robots that can autonomously perform intricate tasks like suturing. Insights gained from its development could also prove useful in other fields of robotics. Read more from James O'Donnell here.
Bits and Bytes
Wikimedia’s CTO: Human contributors remain relevant in the AI age
Selena Deckelmann argues that Wikipedia becomes even more valuable in an era of machine-generated content. (MIT Technology Review)