THE MO CENTRE

Innovative Research on Tech and AI

About Us

Photo by Pavel Danilyuk on Pexels.com (https://www.pexels.com/photo/a-bearded-man-playing-chess-8438922/)

  • HOW HUMAN INTELLIGENCE CAN INFLUENCE SUCCESSFUL ARTIFICIAL INTELLIGENCE

    Introduction

    The questions awaiting answers are these: are humans still in control, or are artificial intelligence and machines now in charge? Can human intelligence still lead the way in a world increasingly driven by AI and technology? Is AI really the future? There is an exciting and sometimes complicated relationship between human intelligence and AI, and understanding it can lead us to success.

    Humans have long been trying to make machines that can think like us. But what if we could use our intelligence to lead machines, and guide them to success? Recent research suggests that human intelligence and artificial intelligence can work together to create a powerful and successful society.

    AI has come a long way in recent years. What began as simple rule-based systems has become much more complex. AI now has the capability to learn and adapt to the world around it. But, in order to really make AI successful, the intelligence of humans is essential.

    AI and human interaction

    AI is able to learn from humans and use their expertise to create more advanced solutions. Humans can also use AI to better understand the world around them and make more informed decisions.

    Humans have the ability to understand the world in ways that AI can’t. We have the capacity to think abstractly and to recognize patterns. Humans also provide AI with the right data set and guidance to help it reach its full potential.

    AI has been growing rapidly over the past decade and is now a part of our everyday lives. But, how is human intelligence guiding this growth? AI is driven by data, and humans are the ones who decide how, when and what data is used to create AI. Humans are also the ones who decide which tasks AI should be used for, and how it should be used. This means that human intelligence is essential for the development of AI and the success of its applications.

    This human-AI Interaction is desirable as AI can take on simple tasks, freeing up humans to focus on more complex tasks. AI can also help with decision-making, providing valuable insights that can help humans make better decisions.

    The impact of human-AI interaction is felt in many areas. AI is able to assist in medical diagnosis, provide insights into financial markets, and develop more efficient transportation systems. In addition, AI helps us to better understand our environment and the impact of our actions.

    The Benefits of Human-AI Interaction

    The benefits of human-AI interaction are clear. Together, humans and AI can create powerful solutions to many of the world’s most challenging problems, from climate change to poverty, human rights, insecurity and sustainable development. By combining our intelligence with that of AI, we can create a brighter and more prosperous future.

    AI can also be used to augment human expertise. By combining AI and human intelligence, we can create solutions that are more powerful and effective than either one alone.

    Humans have been able to use their knowledge and experience to guide AI in the right direction. By combining human intelligence and AI, we can create powerful solutions that help us solve complex problems.

    The Future of Human-AI Interaction

    As AI technology continues to evolve, the potential for human-AI interaction is enormous. In the future, AI will become even more powerful and able to take on more complex tasks. Humans will be essential in guiding AI to greater success and helping it reach its full potential.

    However, it is important to remember that AI is only as good as the data it is given. If the data is inaccurate or incomplete, the results of the AI will be compromised. This is why it is essential to ensure that the data used to create AI is accurate and complete. Human intelligence can help us make sure that the data is correct and can help us improve the accuracy of AI.
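
The point above about data accuracy can be made concrete with a small sketch. Assuming a dataset of records with illustrative fields (`age` and `income` are invented for this example), a simple audit routine can flag incomplete and duplicate records for human review before they are fed to an AI system:

```python
# A minimal sketch of the automated data-quality checks a human reviewer
# might run before a dataset is handed to an AI system. The record fields
# ("age", "income") are illustrative assumptions, not a standard schema.

def audit_records(records, required_fields):
    """Return a summary of completeness and duplicate problems."""
    missing = []                      # indices of records with absent/empty fields
    seen, duplicates = set(), 0
    for i, rec in enumerate(records):
        if any(rec.get(f) in (None, "") for f in required_fields):
            missing.append(i)
        key = tuple(sorted(rec.items()))   # canonical form for duplicate detection
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"total": len(records), "incomplete": missing, "duplicates": duplicates}

sample = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},   # incomplete: flagged for human review
    {"age": 34, "income": 52000},     # exact duplicate of the first record
]
report = audit_records(sample, ["age", "income"])
print(report)  # {'total': 3, 'incomplete': [1], 'duplicates': 1}
```

A check like this does not make the data "correct" by itself; it surfaces the gaps so that human judgment can decide what to fix or discard.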

    Conclusion

    It’s clear that human intelligence is essential for the success of AI. We have the ability to provide AI with the guidance and data it needs to reach its full potential. Human intelligence and AI are both essential components of a successful AI system. By leveraging their strengths, we can create powerful solutions that lead to success. This intelligence entails creating an ethical framework and guardrails and ensuring responsible, human-centered AI, as this will guide AI and create a better future for us all.

  • HOW AI RESEARCH, DEVELOPMENT AND DEPLOYMENT CAN SHAPE THE FUTURE OF AI AND BECOME CORE DELIVERABLES
    Image by The Mo Centre

    Research has played a pivotal role in the growth and development of every sector of society, particularly technology. It has led to advanced learning and findings that have contributed to continuous knowledge acquisition and the improvement of future technologies. The field of artificial intelligence is no exception and has especially been boosted by research.

    As technology advances, governments and tech companies must work together to foster AI transformation. With artificial intelligence (AI) rapidly becoming an integral part of our lives, governments and tech companies must collaborate to ensure its development is done safely and ethically.

    The future of AI is here, and it’s being shaped by research, development, and deployment. AI is drastically changing the way we think, the way we work, and the way we live. The statistics and research below offer an in-depth look at how AI is reshaping the world.

    AI is one of the fastest growing fields in tech. According to a recent report by PwC, AI is expected to contribute $15.7 trillion to the global economy by 2030!

    Research is a key part of the AI equation. A number of organizations are investing heavily in AI research, including Google, Microsoft, and IBM. In 2018, Google invested over $30 billion in AI research and development! AI deployment is critical in order for AI to reach its full potential. According to the same PwC report, AI deployment is expected to create 58 million new jobs by 2022.

    AI is increasingly being used in a variety of industries, from healthcare to manufacturing. According to McKinsey & Company, AI has the potential to create $13 trillion in additional global economic output by 2030. AI is also being used to automate and streamline processes. AI-based automation is expected to reduce labor costs by up to 25%, according to the World Economic Forum.

    In 2019, the European Union (EU) launched a European Artificial Intelligence Alliance to bring together stakeholders from across the AI industry and support the development of responsible AI. The Alliance has a €20 million fund to finance AI projects, focusing on areas such as digital transformation, health and robotics.

    In the United States, the National Science Foundation has invested $50 million in AI research and development. The funding has been used to support a range of projects, including those that explore the ethical issues of using AI in healthcare.

    The United Nations has also taken a proactive approach to AI, with the organization working with governments, civil society and the private sector to create a framework for responsible AI development. The UN’s efforts are focused on ensuring that AI is used to benefit humanity, rather than as a tool to oppress.

    Tech giants such as Google and Microsoft have also been quick to embrace AI. Google recently launched its AI for Social Good initiative, which is aimed at helping to solve global challenges using AI. Microsoft has also invested heavily in AI, with its AI for Good initiative focusing on helping to solve issues in healthcare, education and the environment.

    The World Economic Forum has also played an important role in fostering AI development. Its AI for Good program is focused on creating ethical, responsible and equitable AI solutions that can be used to benefit the world. The initiative is backed by major tech companies, including Microsoft, Google, IBM and Apple.

    AI is also being used for social good in other ways. AI for Earth is a Microsoft-backed initiative that uses AI to help monitor and protect the environment. The program focuses on areas such as water conservation, biodiversity and climate change.

    The impact of AI is already being felt in the world. According to a recent report by The Economist, AI-based automation is expected to increase global productivity by up to 40% by 2035. From research to development to deployment, AI is changing the world in ways we never imagined. It is crucial for governments, tech agencies and organizations to get actively involved in the future of AI today through research, development, and deployment, as these are capable of shaping the future of AI and becoming core deliverables.

    CONCLUSION

    The future of AI hinges largely on the level of research and advancement made in its development and deployment. The core deliverable here isn’t just creating AI systems but ensuring that they are ethically tested, responsibly developed and deployed for human use, free of risks and challenges. It is also clear that synergy in research and development between governments and tech companies will foster an enabling environment that ensures AI is used responsibly and for the benefit of humanity. With collaboration and innovation, we can foster AI transformation and create a better future for the world.

  • THE MO CENTRE’S RHETORICS ON TRUSTWORTHY AI
    Image by The Mo Centre

    It is without doubt that AI remains the rave of the moment. The rapid technological advancement of the 21st century, epitomized by the Fourth Industrial Revolution (Industry 4.0), has led to deeper insights into the future of technology, particularly AI. The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technological advances commensurate with those of the first, second and third industrial revolutions.

    Advancement into the Fifth Industrial Revolution incorporates concepts such as “sustainability,” “human-centeredness,” and “concern for the environment,” in addition to the transformation of the industrial structure through the utilization of AI, IoT, big data, etc., which was the focus of the Fourth Industrial Revolution.

    Even as the world seeks to combine the wave of innovation with a sound regulatory framework, questions that need answers still remain in truly ascertaining the future of AI. Where is the place of ethics and trustworthiness in these AI systems?

    1. AI is a double-edged sword; it is the Frankenstein monster. How do we create trustworthy AI when AI companies are in stiff competition with each other, from Google, Bing, NVIDIA, Apple, Samsung, Microsoft, IBM, OpenAI, Salesforce, Amazon, UiPath, Meta, Tesla, SentinelOne, Aurora, C3AI and others?

    2. How do we create AI regulation when countries of the world are at technological war (e.g., trade wars, economic sanctions and diplomatic rows), as well as in the race for world power: US vs China, West vs East?

    3. How do we create trustworthy AI when profit has become the currency for most industrialized AI companies?

    4. How do we create trustworthy AI when there is no international cooperation and agreement on what is morally and ethically right or wrong?

    5. How do we create trustworthy AI when most AI companies have huge financial power and hold the key to innovation and technology expansion, thereby creating an unequal AI market?

    6. How do we create trustworthy AI when sanctions in most proposed AI regulations can easily be absorbed by AI companies that make billions of dollars in profit and can pay the fines imposed while continuing unethical and irresponsible AI practices?

    7. How do we create trustworthy AI when end users are at the mercy of algorithms created by AI companies, and the value chain runs from top to bottom rather than bottom to top?

    8. How do we create trustworthy AI when most AI companies are domiciled in the Global North and owned by the big 5 countries?

    9. How do we create trustworthy AI in an age of planetary disruptions, nuclear warheads and autonomous weapons wreaking havoc on humanity and occasioning irreparable human rights violations?

    10. How do we create trustworthy AI when the Global South has no place in the global AI conversation?

    Conclusion

    Though there is global consensus on creating guardrails for AI while not stifling innovation, the crux is that until trustworthy AI is enthroned, the future of AI cannot be guaranteed. Creating trustworthy AI entails that these questions are answered and their answers adopted in ongoing regulations. An abstract regulation framed by panelists will not solve the challenge of achieving trustworthy AI. The algorithms that power AI reflect those who create them, hence the need for inclusiveness in the AI conversation. According to Gartner, about 30% of AI-based digital products will need a trustworthy AI framework by 2025. Moreover, 86% of users reported their intention to place their trust and loyalty in companies that adopt ethical AI principles. These examples reaffirm the critical need to develop trustworthy AI. When problems get designed and solved more from underrepresented regions, trust in AI will grow. For AI systems to be trustworthy, they need to be responsive to a multiplicity of criteria that are of value to interested parties and end users. If we are to ensure trustworthy AI, then it must be acceptable to and trusted by the people it seeks to serve.

  • The Mo Centre’s Open Letter to the European Union institutions on the European Union Artificial Intelligence Act.
    Image by The Mo Centre

    The discussions on the proposed adoption of a European Union Artificial Intelligence Act (AI Act) have elicited a lot of concerns. On 8 December 2023, 70 civil society groups and 34 expert individuals sent an urgent letter to the Council of EU Member States, the European Commission and the European Parliament urging them “Do not trade away our rights!” in the final trilogue (negotiation) on the landmark Artificial Intelligence (AI) Act.

    The European Union’s ambition with this proposal is an opportunity to further strengthen the protection of people’s rights. As one of the first major regulatory attempts regarding AI, the AI Act will not only have an impact within the European Union and its Member States but will also influence other regulatory frameworks on AI around the world.

    While it is true that the United Nations and the European Union share the same commitment to respecting, protecting and promoting human rights, international human rights law needs to be the guiding compass at a time when AI applications are becoming increasingly capable and are being deployed across all sectors, affecting the lives of everyone. By firmly grounding new rules for the use of AI in human rights, the European Union would be able to strengthen human rights protection in the face of the ever-increasing use of AI applications in our everyday lives.

    The Centre has deemed it worthwhile to give a human rights-based analysis of the subject matter as well as relevant recommendations on the way forward for the EU AI Act. These are analyzed under five (5) headings, discussed seriatim.

    1. High Risk classifications

    – The determination of risks in the EU Act should relate to the actual or foreseeable adverse impacts of an AI application on human rights and not be exclusively technical or safety-oriented.

    – The EU Act must ensure that AI systems that carry significant risks for the enjoyment of human rights should be considered high-risk, with all associated obligations for their providers and users.

    – Companies should not be allowed to self-determine whether their AI system would not be in the high-risk category, and hence opt out of the more stringent requirements for high-risk classes.

    – Such a model of self-assessment of risk would introduce considerable legal uncertainty, undercut enforcement and accountability, and thereby eventually risk undermining the core benefits of the AI Act.

    2. Stringent limits to the use of biometric surveillance and individualized crime prediction

    – The Centre supports a ban on the use of biometric recognition tools and other systems that process the biometric data of people to categorize them based on the color of their skin, gender, or other protected characteristics.

    – The Centre supports bans on AI systems that seek to infer people’s emotions, individualized crime prediction tools, and untargeted scraping tools to build or expand facial recognition databases.

    – The Centre agrees that such tools entail dangerous accuracy issues, often due to a lack of scientific grounding, and are deeply intrusive. They threaten to systematically undermine human rights, in particular due process and judicial guarantees.

    3. Fundamental rights impact assessments

    – The Centre expresses its strong support for the European Parliament’s proposal for comprehensive fundamental rights impact assessments (FRIA). This is ideal in every sense and would help technology and AI systems advance human rights rather than undermine them.

    – A meaningful FRIA should cover the entire AI life-cycle, be based on clear parameters about the assessment of the impact of AI on fundamental rights; transparency about the results of the impact assessments; participation of affected people; and involvement of independent public authorities in the impact assessment.

    4. Technical standards

    – The role of standard-setting organizations as envisaged in the drafts of the AI Act is complex. There needs to be a more unambiguous and precise role for standard-setting organizations.

    5. Holistic approach to AI harms in all areas

    – The exemption from the AI Act by the Council of the EU for AI systems that are developed or used for national security purposes, as well as exceptions from the AI Act for law enforcement and border control, would exempt fields of application where AI is widely used, where the need for safeguards is particularly urgent, and where there is evidence that the existing use of AI systems disproportionately targets individuals in already marginalized communities.

    – Exemption and exception of these areas would create a substantial and extremely concerning gap in human rights protection under the AI Act.

    Conclusion

    A holistic approach to ensuring that human rights tenets are protected under the new EU AI Act is not only desirable but extremely crucial. In the wake of human rights violations worldwide, we cannot afford to enthrone an era where AI is granted unlimited capability against human rights, as this would amount to surrendering human rights to the whims and caprices of AI systems. The EU must not only lead by example in creating an AI Act but must do so in protecting human rights, for other countries and continents to imitate and adopt.

  • 5 Tips for Tech Companies in creating Responsible and Ethical Tech and AI for Social Good, Sustainable Development and Collaboration

    As the world continues to grapple with the innovative dynamism of AI, more loopholes are spotlighted, and while these have received national, regional and international attention, more is required to ensure these inadequacies are avoided in tomorrow’s tech and AI systems. Sustainable development cannot occur if tech is not aligned with social good, and questions will need to be answered on the place of collaboration in achieving this.

    The current challenge before tech and AI companies is to ensure their innovations and AI systems do not get entangled in the unethical imbroglios associated with several AI systems. Achieving responsible and ethical AI has hence become a yardstick for navigating these murky waters. To create transparency and trust in AI applications and systems, certain fears about AI systems must be tackled by tech and AI companies.

    Responsible AI is an approach to developing and deploying artificial intelligence (AI) from both an ethical and a legal point of view. The goal of responsible AI is to employ AI in a safe, trustworthy and ethical fashion. Using AI responsibly will undoubtedly increase transparency and help reduce issues such as AI bias, which has become a recurring challenge in AI systems.

    Responsible AI is a framework of principles for developing and deploying AI safely, ethically and in compliance with growing AI regulations. It is composed of five core principles: fairness, transparency, accountability, privacy and safety. Tech and AI companies can adopt the following five guidelines to ensure responsible and ethical AI development and deployment.

    I. Transparency and Accountability

    Transparency and accountability are among the hallmarks of a responsible tech company. Tech companies must prioritize transparency in AI systems to ensure a clear understanding of how decisions are made, and implement accountability mechanisms to address biases and unintended consequences, as this helps foster trust among users and stakeholders in the industry.

    II. Inclusive Design Practices

    The need to adopt inclusive design principles to address diverse user needs and avoid unintentional discrimination is integral to responsible tech and AI development. Tech companies must prioritize considering a variety of perspectives during the development process to create solutions that benefit a broad range of individuals and communities. This is more so as the algorithms used to build AI systems must reflect the consciousness of the communities they intend to serve. A multi-stakeholder approach will also enable diverse and rich opinions and inputs, enabling tech for social good devoid of ethical issues.

    III. Data Privacy Protection

    Data is an integral element in AI development hence the need to ensure high standards are adopted in data handling and storage. Tech and AI companies must prioritize robust data privacy measures to safeguard sensitive information. They must implement anonymization techniques, secure data storage, and informed consent practices to respect user privacy and maintain ethical standards.
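
As one illustration of the anonymization techniques mentioned above, here is a minimal pseudonymization sketch. The `email` field, the salt handling and the token length are illustrative assumptions for this example, not a complete privacy solution:

```python
import hashlib
import secrets

# Sketch of one pseudonymization step: a direct identifier is replaced
# with a salted hash before records are stored or shared, so a record can
# still be linked across datasets without exposing the raw identifier.

SALT = secrets.token_bytes(16)  # kept secret and stored separately from the data

def pseudonymize(record, id_field="email"):
    """Return a copy of the record with the identifier replaced by a token."""
    out = dict(record)
    digest = hashlib.sha256(SALT + record[id_field].encode()).hexdigest()
    out[id_field] = digest[:16]  # truncated hex token replaces the identifier
    return out

row = {"email": "ada@example.com", "purchase": "course"}
safe = pseudonymize(row)
print(safe["email"])  # 16-hex-character token instead of the raw address
```

With the same salt, the same identifier always maps to the same token, which preserves linkability for analysis; destroying the salt later makes re-identification from the tokens computationally impractical.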

    IV. Stakeholder Engagement and Collaboration

    Tech and AI companies must foster collaboration among various stakeholders, including communities, NGOs, and experts, to ensure a holistic approach to problem-solving and to ensure that tech and AI innovations advance social good. They must engage in open dialogues to gather diverse insights and co-create solutions that align with social and sustainability goals. Collaborative engagement will position tech and AI companies at an advantage, with access to more insightful information and ideas.

    V. Continuous Monitoring and Adaptation

    AI systems must not be built in finality; rather, mechanisms must be established for the continuous monitoring of AI systems to identify and rectify biases, ethical concerns, and unintended consequences. Tech and AI companies must embrace an adaptive approach, incorporating feedback loops and iterative improvements to address evolving challenges and maintain alignment with ethical standards. This medium of oversight will help monitor irregularities and also allow for amendments to address unforeseen challenges and shortcomings.
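
One way such a monitoring mechanism might look in practice is a periodic check of an AI system's decision rates across groups in recent traffic, flagging large gaps for human review. The group labels and the 0.2 threshold below are illustrative assumptions, not a standard:

```python
# Sketch of a simple fairness-drift check: compare the positive-decision
# rate per group and flag the gap for human review when it is too large.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def needs_review(decisions, max_gap=0.2):
    """Return (flag, rates); flag is True when the rate gap exceeds max_gap."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

recent = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
flagged, rates = needs_review(recent)
print(flagged, rates)
```

A check like this is only one feedback loop among many; the point is that it runs continuously on live decisions rather than once at launch.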

    Conclusion

    The future of tech and AI will be determined by the level of adherence to responsible and ethical guidelines by AI companies and by how encompassing these regulations are when weighed against the dynamic nature of tech and AI innovation. The current ethical challenges of AI systems have created the foundation for a more nuanced responsible and ethical framework, and tech and AI companies must adopt it to avoid pitfalls and create more trustworthy and people-centered tech and AI systems for social good and sustainable development.

  • FAILURE OF AFRICAN TECH STARTUPS AND IMPACT ON FUTURE FUNDING OPPORTUNITIES FOR UPCOMING STARTUPS

    Africa is home to some of the most innovative and dynamic tech startups in the world. Yet despite this, recent reports have shown that many of these startups have failed. But why is this the case? What does this mean for the future of African tech startups? It’s no secret that startups all around the world face many struggles. But what about tech startups in Africa? And what is the way forward for long-term success and investor confidence?

    African startups face numerous challenges within the African startup climate, but despite the struggles, African tech startups have found creative ways to make the most of their resources. Many have developed strategies to reduce their infrastructure costs, such as using cloud computing and accessing remote storage. They have also found ways to attract investors, such as through crowdfunding platforms or incubator programs.

    In the past few years, the African tech startup scene has been growing rapidly. From fintech to e-commerce, edtech, legal tech, medical tech, agrotech and more, the opportunities for entrepreneurs to make an impact are immense. The success of African tech startups has had a major impact on the continent. The growth of these companies has been linked to an increase in job creation, economic growth, and improved access to services such as healthcare and education.

    Despite the promise of success, however, many of these startups have recently failed. According to a report by the African Development Bank, the failure rate of African tech startups is alarmingly high, with nearly 80% of startups failing in their first year. This is in stark contrast to the global average of 50%. It’s also estimated that approximately 90% of African startups fail within the first 5 years.

    There are several factors that contribute to the high failure rate of African tech startups. Poor management, inadequate funding, and lack of access to markets are all major causes of failure. In many African countries, there is a lack of reliable internet and electricity, making it difficult to launch companies that rely on the internet or store large amounts of data.

    In April 2023, the Nigerian crypto and web3 company Lazerpay shut down its operations after its inability to raise additional funding. Dash, a Ghanaian startup, folded after raising $86 million in seed funding. It was reported that there was a shortfall of at least $25 million unaccounted for, with a burn rate of $500,000 per month and no revenue.

    Kenyan logistics startup Sendy, which enabled retailers to purchase FMCGs directly from manufacturers, among other services, shut down its operations despite obtaining $26 million in funding. Also, 54gene, a Nigerian biotech and health startup, raised $45 million in three funding rounds to achieve the extraordinary for African genomics. However, the company shut down in a cloud of controversy within 5 years of operation.

    It was rumored that Payday’s founder was paying himself $15,000 monthly before the company packed up. Another founder was paying himself $50,000 monthly, leading to his company’s collapse. Both did so with investor funds. This financial recklessness has gained ground due to the absence of oversight and monitoring mechanisms for seed funds given to startups. These founders have been given unlimited powers to use funds at will, without an adequate framework in place to guide the use of those funds.

    The failure of African tech startups has serious implications for the future of the industry. With investors wary of investing in African startups, access to funding is becoming increasingly difficult. This has led to a decrease in the number of new startups being created. At the end of the first six months of 2023, VC funding in the African Tech ecosystem declined by 48% compared to the previous year.

    Unethical practices, lack of due diligence and the lure of instant personal gratification have become the bane of most African startups. The absence of a unified legal framework to guide tech startups in Africa, as well as the lack of punitive measures for fraudulent misuse of funds by founders, has also contributed to the increasing closure of startups after huge funding rounds, as the perpetrators often go unpunished and misused funds go unrecovered.

    There is, however, hope for the African tech startup scene. By focusing on better management, increased access to funding and markets, and improved access to resources, entrepreneurs can increase their chances of success. There is a need for a robust, innovation-friendly framework to boost investor confidence and curtail financial abuse by founders.

    Governments can also play a role in improving the success rate of African tech startups. By providing tax incentives and other forms of support to startups, governments can create an environment that is conducive to success. The African tech ecosystem must be regulated to improve innovation yet reduce and tackle fraud and unethical practices. The era of founders apportioning to themselves huge salaries at the expense of their startups should be discouraged through the introduction of a legal framework for startups, one that outlines working mechanisms and financial ethics for tech startups as well as the need for financial prudence, transparency and accountability.

    Conclusion

    Despite the challenges, African tech startups still have a chance to succeed. By focusing on the right strategies and taking advantage of the available resources, these startups can succeed and create a positive impact on the African economy. The potential of African tech startups is undeniable. With the right framework in place, these companies can become major players in the global market. They can create jobs, spur economic growth, and improve access to services within and outside Africa. It is time for the African start up ecosystem to learn from the challenges of failed start ups and lean towards making Africa a profitable and financially prudent start up destination.
