We are the Hanoi AI Safety Network (HAISN), an organization based in Hanoi, Vietnam, that works to make AI safer.
We believe current AI development trends are highly unsafe, posing a real threat of catastrophic risks, while current work in AI safety remains inadequate to address future AI risks.
This is why HAISN was founded!
We work to advance the field of AI safety through a variety of activities and programs. Our current activities include:
AI Safety Intro Fellowship: A weekly introductory reading and discussion group on AI safety for curious university students. Funded by Open Philanthropy.
Support: We provide learning materials (including our community bookshelf), compute for running your AI experiments, pointers to relevant career opportunities, and access to a network of AI safety researchers who can support your research and future exploration.
Research Hackathons: We facilitate monthly research hackathons in collaboration with Apart Research. Funded by OddlyNormal.
Whether you’re an AI developer, a researcher, or simply an enthusiast, please feel free to reach out to us!
Let’s all shape the future of safe and trustworthy AI!