top of page

Technical Artificial Intelligence
Safety in the UAE 

A regional problem profile and career guide

unnamed (1).jpeg
Written by Sneheel Sarangi
25-min read

Two-Minute Summary

Unaligned artificial intelligence (AI) is an artificial intelligence that behaves differently than what its programmer intends. As AI research progresses, there is a high chance that in the next few decades an unaligned AI will have the power to make decisions that affect all of humanity. There is also reason to suspect that the decisions made by these systems will not adhere to human values, in the process causing massive harm. The amount of people working on this problem remains small, however the relatively minimal amount of research done has already shown significant results. This problem thus has a large scale, is highly neglected, and tractable.

 

While the United Arab Emirates might not be considered a technology hub, it has started banking on Artificial Intelligence (AI) to lead its post-oil growth. It is eager to become a leader in the AI space, and has a track record of adopting AI technologies. Adopting an unaligned AI system will however result in massive losses, both for the people and the economy. The country has also significantly invested into AI research, such as inaugurating the world’s first AI-focused university, and establishing AI research labs. However, AI-safety related work remains largely neglected. Nonetheless, principles such as “developing AI aligned with human values” feature in the official government toolkits, and thus there is potential for the government to support efforts in that regard.

 

People in the UAE do not have many locally available ways to contribute to solving this problem, as most work is done in Western countries. We primarily recommend AI safety work to students who are ready and determined to pursue a Ph.D. in a top institution outside the country; most roles in the area require a Ph.D., preferably in machine learning. We also recommend this role to engineers highly skilled in ML or those who have strong software engineering skills, and are willing to relocate to countries such as the US or UK. However, given the UAE’s vision of becoming a ‘hub of AI’, it might also be valuable to do a Ph.D. locally and become an AI researcher who is familiar with key regional developments in the field of AI alignment. Lastly, there remains the possibility of gaining relevant credentials and starting a new AI safety lab at an institution in this region. However, the latter suggestion comes with several caveats, and further research needs to be conducted regarding it.

 

The following sections expand on the analysis of the AI safety cause area and then present a detailed career guide with suggestions tailored for those in the UAE.

Introduction

Unaligned Artificial Intelligence (AI) is a problem not well known amongst the general public, yet several experts and high-profile technologists have cautioned that it is one of the most pressing problems humanity might face. The Effective Altruism (EA) community believes that a career in trying to solve AI alignment might be the most impactful thing one can do. This problem profile presents a brief overview of the problem and analyzes it in terms of its scale, neglectedness, and solvability/tractability. Some parts have been adapted from the problem profile published by 80,000 Hours, an EA-aligned org (available here). Most of the advice that can be found in the career guide on 80,000 Hours and other sources is intended for a Western audience. This problem profile, and the accompanying career guide, aim to both be an introductory resource to AI safety for those not acquainted with Effective Altruism, and provide more contextual information and career advice for people in the UAE.

 

This problem profile touches upon several philosophical concerns for tackling the problem and can be used as an introduction to the problem of AI safety. To get an idea about the technical work being done in the field, one can look at publications by organizations such as CHAI, MIRI, and OpenAI’s AI safety team. A non-exhaustive list of organizations that undertake research work in the field can be found here.

What is AI safety?

The term AI safety can refer to several different risks with the development of AI. Short-term accidental AI risks include crashes caused by self-driving cars or cases of misuse such as malicious DeepFakes. There are also longer-term misuse risks, such as the potential use of AI in surveillance by authoritarian governments. However, the type of risk this document will address is the long-term accidental risk associated with AI, that is, the building of superintelligent AI, or highly capable generally intelligent AI, having goals that bring harm to us. [1] 

 

AI today can perform some tasks exceptionally well, such as playing a game of chess. Most AI systems in widespread use today are only good at specific tasks they have been trained on, so they are known as narrow AI. In other words, these AIs become superhumanly good at games they were trained how to solve, but do not know anything about performing any other tasks. It is possible and likely though, that in the near future, we will come up with an AI that can perform all tasks that a human can perform as well as a human can, without any task-specific training. We call this type of AI Artificial General Intelligence (AGI). An AGI, if it comes to exist, has the potential to help mankind immensely by automating several tasks and increasing overall productivity and wellbeing. At the same time, it comes with tremendous risks.

 

To introduce the problem of AI safety, Stuart Russell (head of the Center for Human-Compatible AI and professor of Computer Science at the University Of California, Berkeley) gives the example of the Greek myth of King Midas, who wished anything he touched turn to gold. [2] Upon having his wish granted, however, he died of starvation as his food and water turned to gold. In simpler words, while King Midas’s literal objectives were satisfied, the outcome he got clearly wasn’t one he desired. 

 

The development of AI brings to our face a similar problem: Recent research has shown several instances where an AI system does not behave how they were expected to. Take for example the Coast Runners game, where an AI that was tasked with finishing a boat race as quickly as possible ended up moving around in a circle island never finishing the race. Here’s a basic intuition as to what leads to cases like these:

 

  1. AI systems are great at learning how to get better at attaining specific goals.

  2. It is hard to break down a lot of the tasks we want such systems to perform into smaller AI-understandable goals. This leads AI researchers to make the AI learn approximations of the real goals.

  3. Things go wrong when these approximated goals do not actually encode our desired results. Thus, much like the case of King Midas, the AI satisfying its objective leads to a clearly undesired outcome.

While contained instances like the ones above seem harmless, with the development of powerful AGI systems (or superintelligent systems later on) that are given more power, the ‘unintended consequences’ may cause large amounts of harm. If we do not find ways to encode human values, for example, the ‘shortcuts’ powerful systems take might show disregard for human life or values humans hold. 

 

This problem of creating an AI that is in line with our values is called the alignment problem. If an AGI or superintelligence is developed before we can solve the alignment problem, we might not be able to control it. As Richard Ngo puts it: “the key concern … is that we might build autonomous artificially intelligent agents which are much more intelligent than humans, and which pursue goals that conflict with our own.” This can lead to a situation where the AI in pursuit of some goal, ends up causing events as catastrophic as human extinction.

Consider the following thought experiment:

 

Nick Bostrom, director of the Future of Humanity Institute, asks us to imagine an unaligned AGI is given only one goal: to maximize the number of paperclips. [3] This paperclip maximizing AGI will first pursue the goal of increasing its intelligence, solely to get better at its given task. It could then lead to a scenario where it transforms much of the Earth’s surface into paper clip manufacturing facilities, with no regard for human needs–or even for human lives–which came in the way of the manufacture of paper clips. An aligned AI would consider our values, and only choose to build such factories on wastelands, but the unaligned AI will find razing human settlements a significantly easier way to achieve its goal.

 

As outlandish as this scenario seems, we could be hurtling fast towards the day where an AI could be capable of making such decisions. A 2014 survey of the top 100 most cited researchers in AI found that more than half the respondents assigned a 50% or greater chance for high level machine intelligence (HLMI) being created by 2050. They also assigned a significant probability that superintelligence could develop within 30 years after HLMI. [4]

 

Technical AI safety is the field that works to solve the alignment problem, i.e. researches ways such that AI can be developed safely. The following sections analyze the impact and importance of such research within AI, and attempt to show why working in the field is highly impactful.

Resources to learn more

For a more technical introduction to the topic:

 

Measuring the problem

To analyze the problem of AI safety more rigorously, we use the scale/neglectedness/tractability framework that has been adopted by many EA organizations, such as 80,000 Hours. In short, scale refers to the total impact solving the problem will have, neglectedness is related to how many people are already working on the problem, and tractability is about how much progress we can make on a problem by working on it.

Scale

An existential catastrophe is defined as an event resulting in the destruction of humanity’s long term potential. [5] Toby Ord, a senior research fellow at Oxford University and co-founder of the EA movement, estimates that the probability of unaligned artificial intelligence causing an existential catastrophe within the next 100 years is 10%. This scenario doesn’t necessarily mean extinction, but can also mean a future where a superintelligent AI takes control over humanity - limiting it from growing to its full potential. 

 

The world population is expected to be around 9.8 billion by 2050, and 11.2 billion in 2100. [6] Moreover, we expect the Earth to remain habitable for at least 600-800 million years more. Everything we value can only be sustained and increased as much as possible if our future generations are able to survive and flourish. Thus the risk of superintelligent AI harming this future potential of humanity makes the scale exponentially larger.

 

In the UAE, AI is already being implemented to an extent that it plays a part in everyday life, and the country's vision for the future entails it playing an even more prominent role. Therefore, it is likely that when high-level AI like AGI is finally developed, the UAE will be eager to deploy it. However, if AGI is not properly aligned with human values, its adoption might result in wide-ranging, unintended side effects that harm both the people and the economy.

 

Solving the alignment problem will reduce the risk of existential catastrophe due to unaligned or misaligned AI, therefore making the lives of millions of future generations better and allowing humanity to achieve its full potential.

 

Neglectedness

In 2017, only around $6 million of funding was directed into technical AI safety. Recently the field has been receiving more funding, in part due to the growing EA movement. More recent estimates peg annual AI safety funding at around $100 million. [7] In comparison, Cancer Research UK, a single UK-based charity, allocated around $560 million of funds towards cancer research in a year. [8] The National Cancer Institute in the US allocated around $6.4 billion. [9] These figures are 5.6 times, and 6400 times the funding towards AI safety, respectively. Taking into account the scale of the problem of AI safety, it becomes clear how the current level of funding is extremely small.

 

Moreover, since more funding is slowly being diverted into the field, the problem of talent constraint has come to the fore. In 2017, it was estimated that at most 100 researchers were working on technical AI safety research. [10] Even with the significant growth of the field it remains one of the priority paths that 80,000 Hours recommends its audience to consider having a career in. The high level of specialization required to work in the field further compounds the problem. The accompanying career guide hopes to provide a more in-depth guide to persons in the UAE/Middle East interested in working in the field.

 

Within the UAE, AI research has not focused on technical AI safety -- we could not identify specific individuals in the region working on the problem. The neglectedness of this cause is especially apparent because the UAE has recently taken initiative in becoming an “AI Hub”, and is leaning more towards supporting AI research and bringing in academic talent. [11] However, the research being carried out seems to focus solely on industrial applications. For example, many of the projects from the recently established AI research university, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), also seems to be primarily focused towards using AI research for industrial applications.

Tractability

80,000 Hours thinks that a doubling of effort in AI safety would reduce the size of this existential risk by around 1%. Although such a reduction might seem trivial, given the potential scale of the problem, we can expect this to be extremely good. We can understand this using the concept of expected value. [12]

 

By marginally reducing the probability of the occurrence of an extremely bad outcome, in this case the loss of all future human potential, we make a large positive impact on the future of humanity. More concretely, continuing our assumptions from the previous sections and factoring in other extinction risks, let’s say that humans will survive for at least 5000 future generations. Reducing chances of premature extinction by 1% will then, in expectation, amount to saving 50 future generations. [13] If each generation has 10 billion people, this means that we'd expect 500 billion more people to live full lives.

 

Some prominent computer scientists have questioned the tractability of working on AI safety or the effects of the alignment problem. These questions have given ways to debates about AI safety amongst some researchers. Some popular concerns regarding solvability or tractability and replies to them, taken from Stuart Russell’s book Human Compatible and from the 80,000 Hours problem profile are given below:

 

  • "It is too soon to worry about it, the techniques used when we develop AGI will differ significantly from what we have now."

 

The simplest reply to this is that we don’t know how long it will take us to solve the alignment problem. It might be the case that the insights required for a solution come one after another. Moreover, if the current methods to bring about AI safety turn out to be ineffective, building a community that has experience in trying out different methods is important for the time we actually find ourselves capable of solving it.

 

  • "We can switch it off if it does not work as intended."

 

This is certainly possible in the AI systems present today. However, superintelligent AI will already have calculated this possibility as a potential threat to its ability to pursue its goals. Therefore, it would have taken steps to ensure that either it is not turned off, or that it is not possible to turn it off.

 

  • "It could be very hard to solve."

 

There are several global problems for which this argument can be made effectively. [14] However, AI safety is a young field and there is not yet a clear idea of how hard the problem is to solve. Moreover, there has been continuous progress in the field that has also shown results.

 

Some other common myths about technical AI safety research are discussed here.

The Effective Altruism community, at large, sides with those who worry about the problem of AI safety. In a survey conducted in 2017 amongst researchers who published at top machine learning and artificial intelligence conferences, 47% of respondents said that AI safety should be focused on more, and 70% agreed that Stuart Russell’s explanation of the risk of high level machine intelligence pointed at an important problem. [15] Overall, there are lots of promising research directions being pursued by different organizations. Even if most of these directions don’t guide us toward a solution, the insights gained will still be immensely useful.

 

Why the UAE

While the United Arab Emirates might not be considered a technology hub, it has started banking on Artificial Intelligence to lead its post-oil growth. There has been an upward trend in such investment in recent years: Between 2008-2018, the UAE invested $2.15 billion in AI, second only to Turkey in the Middle East and Africa region. By 2031, the UAE aims to become one of the top AI hubs in the world; [17] to pursue this aim, it became one of the first countries to unveil a strategic plan for AI and set up a ministry for Artificial Intelligence. The Middle East as a whole is expected to see approximately USD 320 billion in benefits from the use of AI by 2030; the annual growth in the contribution of AI is expected to range between 20-34% per year across the region, with the fastest growth in the UAE.

Moreover, the UAE has acknowledged the importance of the positive development of AI. The minister of State for Artificial Intelligence, Omar Bin Sultan Al Olama has discussed the need to address short-term AI safety concerns. Smart Dubai, a government organization, recently also released an ethical AI toolkit that it recommends for organizations: Under the AI principle of ‘Humanity’, the toolkit states that AI should be developed to align with human values, and that the government will support the research on the beneficial uses of AI. It also states that AGI and superintelligence, if developed, should serve humanity as a whole, and the government’s official guidelines for Artificial Intelligence reflect this. However, there seems to be no ongoing research related to the field in the region. These factors and others combined have made the UAE place 19th globally, and 1st in the MENA region on the Government AI Readiness Index 2021.

 

Objective 5 of UAE’s 8 strategic AI objectives is to attract and train talent for future jobs enabled by AI. Given the increase in focus on AI education and the expected increase in AI-based jobs in the region, residents of the UAE will have the opportunity to build career capital in the field. This can eventually lead to finding work in roles such as that of an ML engineer at an organization working on technical AI safety. Moreover, while universities in the region do not have as robust of an AI research culture as those in Western countries, they can still provide a launchpad for those interested in AI/AI safety research. The Mohamed bin Zayed Institute of Artificial Intelligence (MBZUAI) is an AI Research-focused university for Masters and Ph.D. students, while other universities including NYU Abu Dhabi, American University of Sharjah, Khalifa University, and others also have researchers working on AI. Even though AI safety is not a research focus for most researchers in the region, their research spanning fields such as Natural Language Processing, Computer Vision, and Machine Learning can help students gain relevant experience.

 

The career guide in the sections afterward details such opportunities in the field and pathways for working on AI safety for interested individuals.

 

Career Guide

 

Local evaluation

The UAE aims to become an AI hub in the next decade, and there is currently considerable funding dedicated to AI research. AI safety research is currently not a focus of funding in the country, which is why the majority of opportunities reviewed here are not based in the Middle East/UAE. However, given the efforts the UAE is concentrating on AI, it is not unreasonable to expect that AI safety will be included in its research agenda in the foreseeable future. We will update this guide with more local and international opportunities as they become available.

 

Pathways to working in AI safety

There are three general pathways to having a career that contributes to technical AI safety:

 

  • AI Safety Researcher: An AI safety researcher decides on research directions, and comes up with ideas on how to tackle problems in AI safety. For example, researchers at OpenAI have come up with approaches such as AI Safety via Debate to help train AI systems in performing advanced tasks while remaining aligned with human preferences. Though not compulsory, this role often requires having a Ph.D. in a relevant subject. A review of a career as an AI safety Researcher can be read here.

 

  • Machine Learning (ML) Engineer: ML engineers at AI safety organizations implement the ideas that researchers come up with. Their tasks can include implementing or debugging ML algorithms, curating datasets for fine-tuning models, or setting up and using evaluation metrics for ML models. Although a Ph.D. is not required for this role, a Master's degree in Machine Learning may be necessary in some cases. There is also scope for software engineers to transition to this role. A guide to a role as an ML engineer at an AI safety organization can be found here.

 

  • Software Engineer: Recently, the growth of the AI safety field has spurred a demand for software engineers. Software engineers help optimize and secure the procedures at an AI safety organization and generally do not require prior ML experience. Their role includes building interfaces to better run AI safety experiments and working on other pipelines to effectively run the experiments. It is different from that of an ML engineer as it does not involve directly implementing ML algorithms, or working on implementing research projects. The specific tasks you will be doing for such a role are often organization-dependent; for one example, the job description for a role at Ought - an empirical AI safety organization - is available here. An in-depth review of having a career as a software engineer for the purpose of having a high impact can be found here.

 

Who do we recommend it for?

Even though the field of AI is slowly growing in the UAE, as of now the country’s research output remains low when compared to several other countries. Most work in AI safety is currently being done in the West and as such there are limited opportunities to work directly on the problem in this region. Therefore, we would primarily recommend this career path to students in high school and undergraduate institutions who are highly motivated and ready to move to countries such as the US, UK, Canada, or Australia for higher education. 

 

However, there might be several opportunities in the region to gain the required career capital and transition into AI safety Engineering and research roles. Even though we could not find anyone from the region working in these roles, we recommend highly skilled early-mid career professionals to explore the possibility of transitioning into such roles.

 

What to do if you are a:

High school student

Being a high school student, you have a lot of time to decide on what you want to do. Start by testing your fit for AI safety, and decide if you want to work in it. One of the next key priorities should be choosing and applying to undergraduate institutions. 

 

The undergraduate institution might be less important for a job in the industry as an ML engineer or software engineer. However, it can become an important factor when being considered for graduate school. A highly regarded institution will have better research facilities and faculty whose research you could contribute to. Moreover, studying in countries such as the US, UK or Canada might also make it easier to get work visas after graduation since a lot of the work relevant to AI safety is done in those regions. Nevertheless, it should be noted that most organizations working on AI safety directly sponsor work-visas for the same purpose.

 

Open Philanthropy offers scholarships for students interested in having careers that benefit humanity and are interested in starting a degree at a top university in the US or UK. UAE nationals should also look out for scholarships from the government.

 

Given the heavy investment in AI and AI education, and looking at the future landscape, studying at a local university might also be a good idea. A list of local universities that offer undergraduate degrees in AI or AI-relevant fields can be found here. The UAE government also offers some programs for residents to get acquainted with AI concepts.

 

If you want to go further and learn more about AI and AGI safety, learning how to code (if you don’t know how to) can be a good first step. For more advanced students, resources in the "Getting Started" section (found below) often do not need a strong technical background to be understood.

 

First-through-third year undergraduate

Students interested in technical AI safety research are encouraged to take majors in technical subjects such as Mathematics and Computer Science, and these can provide a strong background for both research and engineering work. 

 

Undergraduates should also focus on building up their ML/AI knowledge through avenues such as online courses and university classes. They might also benefit from placing an emphasis on learning practical skills early by participating in events such as AI hackathons.

 

Once a base level of knowledge is built, students interested in research can reach out to professors whose work they are interested in and volunteer to work in their labs. Professors are often in need of student assistants, and students should not hesitate to reach out to them. This can lead to publications, which have become a necessary component of successful elite Ph.D. applications.

 

Students interested in ML engineering and software engineering can replace this with local or international internships in AI or software engineering respectively. Keep in mind that ML engineers will also often need to have strong software engineering skills too. There is also a lot of general software engineering advice online such as practicing algorithmic questions for software engineering interviews that students should follow.

 

There are also several organizations that offer summer or semester-long internships in AI safety; though, like jobs, these tend to be concentrated in countries such as the US and UK. Nonetheless, these typically to welcome international applicants and can be an avenue to familiarize yourself with the work being done in AI safety. A non-extensive list of internships that have been offered in the past is curated here.

 

Final year undergraduate

Final year undergraduates will be looking at an important decision in their careers in choosing between industry work, and further education. For those interested in focusing on AI safety for their undergraduate research projects (or capstone projects), Effective Thesis can offer relevant resources and coaching.

 

If you believe you have amassed a strong research background during your undergraduate degree, applying for a Ph.D. at a top institution in the world might be a good idea. Make sure to test your fit, and refer to the "Getting a Ph.D." section (found below).

 

For students interested in getting a Ph.D., but have relatively less research experience, getting a Master's degree first can be a good choice, where you can focus on making your research profile stronger. This path is also recommended for those interested in ML engineering. 

 

A third pathway to look at could be AI residency programs offered by corporations such as OpenAI, Meta, and Google. These one-year programs often pair residents with experienced researchers, create research-oriented environments, and become a pipeline for later jobs at these companies.

Those interested in software engineering or ML engineering roles can start by applying to local jobs in the fields. Getting a job in one of the countries with a focus on AI safety such as the US or the UK is not impossible but likely to be highly competitive. 

 

Ph.D. student

Students working on a Ph.D. in a field related to Artificial Intelligence, especially those that are in the earlier years of their Ph.D., can work on finding ways to direct their research in an AI safety relevant direction. The organization Effective Thesis offers coaching if you are interested in working on AI safety as a research field. If you are unfamiliar with the field, resources in the "Getting Started" section (found below) can help. Getting some experience by working at AI safety labs can also be valuable.

 

More local opportunities to work in fields relevant to AI safety might come up as time goes on; currently the SPriNT-AI Lab at MBZUAI does some work in related fields. Moreover, research internships at top tech companies internationally might also be a good idea. 

 

ML Engineer

If you already work as an ML engineer at an organization, it is possible that most of your skills will be transferrable to an AI safety-focused organization. Before applying to such organizations, going over key research papers in the field and reaching out to people in the teams you want to work with is a good plan. A brief guide to transitioning into the role of an ML engineer at an AI safety-focused lab can be found here.

 

Software Engineer

AI safety organizations are also often in need of software engineers and people in other technical roles. Some of these roles also often don’t require familiarity with machine learning and other AI concepts and can be a way to have an impactful career in AI without the technical expertise. In certain cases, it can also be possible for software developers to learn the required machine learning/AI concepts through training sessions offered by their organizations, and slowly transition into an ML engineering role. A guide to transitioning into an ML engineering role at an AI safety organization can be found here.

 

Software engineers in the UAE can consider applying to such roles in AI safety organizations internationally. Opportunities can often be found advertised on the 80,000 hours website, or on the organizations’ websites.

 

Getting started

With Artificial Intelligence and Machine Learning

Some courses generally recommended include:

 

With AI safety

This program is annually held by EA@Cambridge and can be a great resource for someone getting started in the field. If you do not have time to go through the program, going through the curriculum will also be highly useful.

 

There also exist some other selective programs that have been offered in the past, that may be a good choice if you are interested in getting into the field, especially as a follow-up to other programs such as the AGI Safety Fundamentals program:

 

We would also recommend more advanced students follow the AI Alignment Forum for general discussions and updates about the community and opportunities. AI Safety Support also has lots of resources that you can go over. The Alignment Newsletter is another good resource to follow.

 

Internships

AI safety

For students interested in the field of long-term AI safety research, an avenue to explore the field further can be through internships. While no AI safety internships are currently available in the UAE, some international internships are listed below. Keep in mind that most of these will require some background research experience or machine learning experience:

 

The CHAI Lab at the University of California Berkeley opens internships every Fall.  Looking at the past participants, we surmised that a good number of candidates come from universities outside the United States.

 

An online camp where you can collaborate with researchers working in the field, which usually happens from January to June every year. This opportunity is particularly interesting because candidates don’t always need to have background experience in AI/ML research, and people with different backgrounds are encouraged to apply.

 

(Important to note that as of the 2022 cycle: all interns need to have the authorization to work in the US, the UK, Canada, or France.)

Google's Deepmind is one of the largest organizations working on creating Artificial General Intelligence. They offer research internships for students studying towards a Ph.D. and engineering internships for students at any level (bachelor's, master's, doctorate) in a technical subject (computer science, engineering, maths, physics, etc), who have some practical software engineering experience.

 

The AI Alignment fellowship at the Future of Humanity Institute at Oxford University offers an opportunity for anyone from an undergraduate to a postdoc to conduct AI safety research under a mentor for 3-6 months.

 

We also recommend checking the 80,000 Hours job board and the EA Internship Board for any current opportunities.

 

Software Engineering

Especially for undergraduates, doing summer software engineering internships at top tech companies such as Google, Amazon, Facebook, and Microsoft can be a good idea. 

 

These organizations are generally known to offer internships, and work sponsorships for the duration of the internship, to international applicants. Even though these can be highly competitive, they can be a good way to gain career capital and experience in software engineering.

 

Testing your fit

General

In their Career Guide to pursuing AI safety research in Singapore, EA Singapore wrote the following excellent section regarding why testing for fit is important and how to go about it:

“Testing for fit is important, because it’s very difficult to predict what makes you most happy or successful by going with your gut instinct or by introspection. Since career choices are big decisions (where the opportunity cost can be up to several years of your life doing something else more valuable or meaningful), you want to make doubly sure that the choices you make for your career are well thought out with relevant evidence—these evidence can be collected through a series of tests for fit. 

 

Although I do talk about testing for fit in the recommendations below, some of the guides I’ve linked do not give enough explicit details on how to test for fit well. So, it’s helpful to know how to apply a generic process, such as the one below, to test for fit in any kind of career pathway:

 

  1. List your key uncertainties about your top options. This is so you can find out what specifically you’re unsure of that you want to find out later on in the test for fit. You will likely need to update this list as you get more information. 

  2. List down tests to resolve these uncertainties, starting from the least costly ways. Here is a generic list of tests, from the least costly to the most expensive, that you can apply and tweak depending on the career pathway you’re exploring

    1. Conduct a shallow investigation online to get a good overview of the space.

    2. Talk to people who are a few years ahead in the path you’re exploring. You can also attend events or camps related to the career option as a way to meet people.

    3. Find ways to produce a work output relevant to the career path. For example, you can work on a project (either independently or in collaboration with others), join competitions, or apply to be a volunteer organizer for a relevant community.

    4. Apply for an internship.

    5. Apply for a job.

Work through these tests and gather the evidence you need to make a decision on whether to continue working towards your top options. Remember to keep updating your list of uncertainties. 

 

Note that the test for fit process follows two paths in identifying and uncovering uncertainties: from breadth to depth, and from the cheapest to the most expensive activities. It’s designed this way to manage the risk of over-investing your time in a career pathway that is not the right fit for you. A common example of such failures is someone investing large sums of money and time in a graduate school program, but finding out later on that the subject matter is not the right fit. 

 

For a more comprehensive set of decision-making beyond testing for fit, you can take a look at 80,000 Hours’ career decision process.”

 

A more general test for fit could be checking if you enjoyed reading the resources in the "Getting Started" section, or doing one of the internships listed above.

Regarding Ph.D.s

Although a Ph.D. often sounds like an attractive option, it is often extremely difficult and only suitable for a few people. If you are considering getting a Ph.D., these resources might help you decide:

 

Getting a Ph.D.

Once you have decided on getting a Ph.D. and gone over the general Ph.D. advice, read the following resources:

 

(Note that it is also possible to get a Ph.D. in a field not directly related to Artificial Intelligence, such as Statistics, and still work on certain research questions within AI safety. However, this path is less straightforward and harder to chart.)

 

While UAE universities are not yet as established in AI research in terms of output, when compared to top universities that have existed for longer, there are some local options worth considering:

 

The Mohammad Bin Zayed University of Artificial Intelligence is a newly established university focusing solely on artificial intelligence, with faculty from across the globe. They offer fully-funded Masters and Ph.D. programs in fields such as Computer Vision, Machine Learning, and Natural Language Processing.

 

This program requires one year of coursework at NYU’s New York campus, and the remaining years working under a mentor on the Abu Dhabi campus. Students receive a New York University degree upon graduation. 

 

A list of other local universities offering a Ph.D. program can be found here. Note that it is unlikely there will be chances to directly work on AI safety in these universities, but there can be opportunities to work on tangentially relevant fields. For example:

 

This lab focuses on research topics such as robustness, fairness and bias, and explainable machine learning algorithms. AI safety researchers also often work on these problems and they are relevant to AI safety.

 

The SMART Lab tackles research areas such as human-machine interaction and the prediction of human characteristics. They are also interested in bias detection and prevention in intelligent machines.

 

The Data Science and AI Lab is interested in modeling, simulating, and analyzing social phenomena using computational methods. Their research includes work on topics such as AI ethics and human-bot interactions.

 

In terms of university prestige, ranking, and career opportunities, it is a better idea to get a Ph.D. from a top-ranked international institution if possible. However, rankings are not everything, and for other criteria on which to judge possible Ph.D. programs, refer here. Our interviewees also mentioned the importance of researching and looking into particular research groups within universities and finding groups you wanted to work with.  

 

For Emirati nationals, the UAE government offers scholarships for the same. Open Philanthropy also offers scholarships to those looking to research AI safety during their Ph.Ds.

 

More resources and information about doing a CS Ph.D. in the US:
  1. Getting into a CS Grad school in the USA

  2. A Survival Guide to a Ph.D 

  3. CMU Guide to Grad School

 

Alternate Career Paths

Given the UAE’s expected trajectory, an undergraduate/graduate degree in relevant disciplines is sure to be useful in the job market. Beyond working in AI safety organizations, EAs interested in making an impact can explore the idea of earning to give.

 

The UAE’s AI landscape is fast evolving, and so being an AI researcher in the region who doesn’t specifically work on the domain of AI safety but is aware of AI safety issues should also be of benefit in the long run. In this case, it will still be important to be aware of developments in the field, so as to communicate them to those less informed about them. The growth of the field also brings with it opportunities to work in AI Strategy and Policy, which can be another highly impactful career.

 

Lastly, software engineers can also contribute to other non-alignment-focused AI organizations in fields such as Global Health and Poverty; to explore such roles, the 80,000 hours job board is a good resource.

 

Further Reading

 

Key Uncertainties

The lack of readily available data results in several uncertainties:

  • The report discusses the pathway of pursuing an AI degree at the new AI research university - Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). However, as the university has yet to have its first graduated class of students, it is hard to predict the career/job prospects of studying there. 

  • While the UAE is pushing an AI first agenda, and reports predict a significant chunk of the country’s GDP to be coming from AI-related jobs in the near future, it is hard to find concrete data regarding the presence of AI jobs and opportunities in the region today.

Further Research Directions:

The present report is limited in its scope as it aimed more towards being a guide towards AI safety for those in the UAE, and only conducted a preliminary analysis of AI safety as a cause area in the UAE:

  • Due to rapid innovations, and other factors such as the recent establishment of the AI research university, the AI landscape in the UAE is in flux. It will be useful to create a follow-up report looking at, for example, newer AI developments in the UAE and career prospects of those graduating with AI-focused degrees a few years down the line.

  • A principled survey of AI researchers in the UAE regarding AI safety research can help better understand the openness towards safety research in the region, both by the government and academia.

  • There is also the possibility of receiving government funding for activities such as starting an AI safety-focused research lab. However, this will require research and dialogue with relevant individuals such as those on the UAE AI Council.

About EA NYUAD

EA NYUAD is a chapter of the Effective Altruism movement, located in Abu Dhabi, UAE. Our vision is to foster and develop a welcoming, productive, and robust community of global leaders who use evidence and reason to look for the most effective ways of doing good in the world, and who take action on that basis.

 

Acknowledgments

I would like to thank Professors Martin Takáč, Farah Emad Shamout, and Hanan Salam for their input and support for the project. I also thank Alexandrina Dimova, Koki Ajiri, João Bosco de Lucena, Matthew Varona, and Reem Hazim for their feedback on the drafts of this document. I would also like to thank Brian Tan for his guidance, whose local priorities research for EA Phillipinnes this project was inspired by. Any thoughts or opinions expressed in this report are my own and do not necessarily reflect the views of the aforementioned individuals.

References

  1. Examples taken from Robert Miles’ youtube video "Intro to AI safety"

  2. Taken from Human Compatible: AI and the Problem of Control by Stuart Russell

  3. Taken from Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

  4. Taken from Nick Bostrom’s 2014 survey

  5. Definition from The Precipice by Toby Ord

  6. Figures taken from: https://www.un.org/development/desa/en/news/population/world-population-prospects-2017.html

  7. Rough figure taken from: https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/#top

  8. Figure from: https://www.cancerresearchuk.org/funding-for-researchers/facts-and-figures-about-our-research-funding-0

  9. Figure from: https://www.cancer.gov/about-nci/budget#current-year

  10. Rough figure taken from: https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/#top

  11. Objective 8 in the national AI strategy

  12. Refer here: https://80000hours.org/articles/expected-value/

  13. Building on the line of the argumentation in the 80,000 hours article on longtermism

  14. Problems such as aging fall into this category

  15. Future of Humanity Institute’s 2017 survey of authors who published in ICML, NeurIPS

  16. “Artificial Intelligence in Middle East and Africa” report by Microsoft

  17. From UAE’s National AI Strategy

bottom of page