I’ve spent the past few days asking AI companies to convince me that the prospects for AI safety have not dimmed. Just a few years ago, there seemed to be universal agreement among companies, legislators, and the general public that serious regulation and oversight of AI was not just necessary, but inevitable. People speculated about international bodies that would set rules to ensure AI was treated more seriously than other emerging technologies, and that could at least provide obstacles to its most dangerous implementations. Corporations vowed to prioritize safety over competition and profits. While doomers still spun dystopian scenarios, a global consensus was forming to limit AI’s risks while reaping its benefits.
For founders seeking funding, the more useful question is not how to win recognition, but how to build trust. What would make a skeptical scientist take the work seriously? What would make a clinician believe it belongs in practice? What would make outcomes measurably better at scale? These are the questions that shape conviction in longevity investing, even if they are less glamorous than a trophy.
Here’s the problem: Those Big Five control over 80% of the trade publishing market. Indie publishers exist, but they need more support—a lot more support—than they’re getting.
During development I encountered a caveat: Opus 4.5 can’t run the app or view terminal output, especially output with unusual functional requirements. But despite being blind, it knew enough about the ratatui terminal framework to implement whatever UI changes I asked for. A large number of UI bugs likely stemmed from Opus’s inability to create test cases, most notably failures to account for scroll offsets that resulted in incorrect click locations. As someone who spent five years as a black-box software QA engineer, unable to review the underlying code, this situation was my specialty. I put my QA skills to work by messing around with miditui, reported any errors to Opus, occasionally with a screenshot, and it fixed them easily. I do not believe these bugs show LLM agents to be inherently better or worse than humans, as humans are most definitely capable of making the same mistakes. Even though I am adept at finding bugs and offering solutions, I doubt I would have avoided similar bugs had I coded such an interactive app without AI assistance: QA brain is different from software engineering brain.
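To make the scroll-offset bug class concrete, here is a minimal Rust sketch of the mistake and the fix. This is not code from miditui; the function names and layout are hypothetical, and it stands in for how any TUI (ratatui included) must map a terminal click row back to a list item once the list can scroll.

```rust
// Hypothetical sketch of the scroll-offset bug described above: mapping a
// mouse click's terminal row to an index into a scrollable list widget.

/// Buggy: assumes the first visible row is always item 0, ignoring scrolling.
fn clicked_index_buggy(click_row: u16, list_top: u16) -> usize {
    (click_row - list_top) as usize
}

/// Fixed: add the scroll offset so visible row N maps to the correct item.
fn clicked_index_fixed(click_row: u16, list_top: u16, scroll_offset: usize) -> usize {
    (click_row - list_top) as usize + scroll_offset
}

fn main() {
    // The list widget starts at terminal row 3 and is scrolled down 10 items;
    // the user clicks terminal row 5, i.e. the third visible row.
    let (row, top, scroll) = (5u16, 3u16, 10usize);
    println!("buggy: item {}", clicked_index_buggy(row, top)); // selects item 2, wrong
    println!("fixed: item {}", clicked_index_fixed(row, top, scroll)); // selects item 12
}
```

The bug is easy to miss precisely because it only appears after the user scrolls, which is exactly the kind of interactive state a blind agent cannot exercise on its own.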