Restructuring
- We reworked the activity pipeline for our weekly meetings. It now goes as follows:
- Self-research: Spend 1–1.5 hours researching the topics or working on the problems you want to discuss that day, then answer some related questions.
- Discussion: Spend 15–20 minutes discussing what you just researched with someone who researched the same thing or already has some foundation in it.
- Presentation: Spend ~10 minutes presenting your findings to everyone, then exchange questions about them as a way to think critically about the work.
- We split group activities into two main streams. This is not a strict classification: members are free to hop between the two streams whenever they want, as long as they follow the newly established activity pipeline.
Two Main Streams
In the first stream, we built a better understanding of some of the foundational techniques and architectures used in modern AI systems by:
- Doing some basic research on neural networks and machine learning (a toy example of the kind of material this covers is sketched after this list)
- Spending a few minutes going through some questions on these topics
- Discussing the implications of these answers for AI development and, sometimes, AI Safety
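As a concrete illustration of the kind of basics this covers, here is a minimal sketch (a toy example of our own, not taken from any particular session) of a two-layer neural network learning XOR with plain NumPy:

```python
import numpy as np

# Toy two-layer network learning XOR with full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for a squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```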
For members interested in Mechanistic Interpretability, the activities were as follows (a minimal hands-on sketch comes after this list):
- Read and research further into the topics presented in one or more of the following articles/papers:
- Then, attempt to:
- Summarize the central claim and express any uncertainty you have about it.
- Find evidence or arguments that support or refute the claim.
- Answer some questions related to the reading.
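One hands-on exercise that pairs naturally with this reading (a minimal sketch of our own; the tiny model and layer choice are illustrative assumptions, not from any particular paper) is capturing a network's intermediate activations with a PyTorch forward hook:

```python
import torch
import torch.nn as nn

# Tiny illustrative model; the same pattern works for any nn.Module.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def save_activation(name):
    # Returns a hook that stores the layer's output under `name`.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach the hook to the hidden ReLU layer.
handle = model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(2, 8)
model(x)
handle.remove()

print(activations["hidden_relu"].shape)             # torch.Size([2, 16])
print((activations["hidden_relu"] > 0).sum(dim=1))  # active units per input
```

Counting which hidden units fire on which inputs is a crude first step, but it makes the "look inside the model" framing of these readings concrete.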
In the second stream, we:
- Read through the course on AI policy from BlueDot Impact.
- Researched and tried to answer some questions about recent developments in AI to get a sense of the landscape, mainly how fast it is moving.
- After this, we each chose one of the following questions, spent 15 minutes researching it, and wrote up a one-page report. We then presented our findings to everyone and answered their questions:
- What is the EU AI Act about? How does it help prevent AI risks?
- What is the current AI safety paradigm in developing countries?
- What is Moore's law? Does it still hold today? What does this mean for the growth of AI development? (A back-of-the-envelope sketch follows this list.)
- Have there been any examples of AI misuse? What about accidents? What about misalignment? Distill and demonstrate them.
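For the Moore's law question, the arithmetic behind it is worth making explicit: a doubling every ~2 years implies a growth factor of 2^(t/2) over t years. A quick back-of-the-envelope sketch (the 2-year doubling period is the classic figure, not a claim about current hardware):

```python
# Back-of-the-envelope Moore's law arithmetic: one doubling every ~2 years.
DOUBLING_PERIOD_YEARS = 2.0

def growth_factor(years: float) -> float:
    """Multiplicative growth over `years` given a fixed doubling period."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (2, 10, 20):
    print(f"{years:>2} years -> x{growth_factor(years):,.0f}")
# Prints: x2, x32, and x1,024 respectively.
```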