
AI Future — unbiased, fostering inclusivity & equality | Priya Bhasin | TEDxBoston

I hope this hasn’t been posted before, especially by me. I do have a bit of pre-dementia from my TBI, but it’s not too bad. Anyway, they’re working on weeding out bias from AI and making it so it’s not bad for us or to us.


A thought-provoking TED Talk on how AI can unintentionally reinforce societal prejudices, perpetuate discrimination, and amplify toxic behaviors online. The talk is a call to action for individuals, tech companies, and policymakers alike: by addressing AI bias and toxicity head-on, we can pave the way for a future where AI systems are truly unbiased, fostering inclusivity and equality for all.

AI, Algorithm, Behavioral Economics, Discrimination, Diversity, Empathy, Engineering, Entrepreneurship, Social Entrepreneurship, Social Media, Software, Voice, Vulnerability, Women, Women in business, Women’s Rights, Work, Workplace, Writing

Priya is a product leader with over nine years of experience building large-scale consumer products at Yahoo and Apple. She is passionate about driving innovation while building dynamic and inclusive teams. During her time at MIT, she built inclusively and won prestigious funding through the MIT 100K award (previous finalists include HubSpot and Akamai). She was also invited to TEDx Boston and the MIT Media Lab to share her work. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Fueled by new chemistry, algorithm mines fungi for useful molecules

A newly described type of chemistry in fungi is both surprisingly common and likely to involve highly reactive enzymes, two traits that make the genes involved useful signposts pointing to a potential treasure trove of biological compounds with medical and chemical applications.

It was also nearly invisible to scientists until now.

In the last 15 years, the hunt for molecules from living organisms—many with promise as drugs, antimicrobial agents, chemical catalysts and even food additives—has relied on algorithms trained to search the DNA of bacteria, fungi and plants for genes that produce enzymes known to drive reactions that result in interesting compounds.

Future AI algorithms have potential to learn like humans, say researchers

Memories can be as tricky to hold onto for machines as they can be for humans. To help understand why artificial agents develop holes in their own cognitive processes, electrical engineers at The Ohio State University have analyzed how much a process called “continual learning” impacts their overall performance.

Continual learning is when a computer is trained to continuously learn a sequence of tasks, using its accumulated knowledge from old tasks to better learn new tasks.

Yet one major hurdle scientists still need to overcome to achieve such heights is learning how to circumvent the machine learning equivalent of memory loss—a process which in AI agents is known as “catastrophic forgetting.” As AI agents are trained on one new task after another, they tend to lose the information gained from those previous tasks, an issue that could become problematic as society comes to rely on AI systems more and more, said Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at The Ohio State University.
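As a concrete illustration of this effect, here is a minimal PyTorch sketch (not the Ohio State team’s code; the task setup is made up) that trains one small network on task A, then on task B with no replay of A, and checks how accuracy on task A degrades.

```python
# Minimal sketch of catastrophic forgetting (illustrative only).
# A small network is trained on task A, then on task B; accuracy on task A drops.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(weights):
    """Synthetic binary classification task defined by a fixed linear rule."""
    x = torch.randn(2000, 20)
    y = (x @ weights > 0).float()
    return x, y

task_a = make_task(torch.randn(20))
task_b = make_task(torch.randn(20))

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return ((model(x).squeeze(1) > 0).float() == y).float().mean().item()

train(*task_a)
print("task A accuracy after training on A:", accuracy(*task_a))
train(*task_b)  # sequential training, no replay of task A
print("task A accuracy after training on B:", accuracy(*task_a))  # typically drops
```

Continual-learning methods aim to close exactly this gap, for example by replaying old data or penalizing changes to weights important for earlier tasks.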

Here’s what quantum computing is—and how it’s going to impact the future of work, according to a software engineer

The digital devices that we rely on so heavily in our day-to-day and professional lives today—smartphones, tablets, laptops, fitness trackers, etc.—use traditional computational technology. Traditional computers rely on electrical impulses to encode information in a binary system of 1s and 0s. This information is stored and processed in units called “bits.”

Unlike traditional computing, quantum computing relies on the principles of quantum theory, which describe matter and energy at the atomic and subatomic scale. With quantum computing, information is no longer limited to being a 1 or a 0; instead, a quantum particle can exist in both states, the 1 and the 0, at the same time.

Quantum computing measures the states of subatomic particles such as electrons or photons, which serve as quantum bits, or “qubits.” Each additional qubit doubles the size of the state space, so the power of a computation grows exponentially with the number of qubits used. Quantum computing has the potential to solve in a matter of minutes problems that would take traditional computers tens of thousands of years to work out.
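To make that scaling concrete, here is a small NumPy sketch (illustrative only, not tied to any real quantum hardware) showing that an n-qubit register is described by 2^n complex amplitudes, and that measuring a superposition samples one basis state at random.

```python
# Minimal NumPy sketch (illustrative only): an n-qubit register is described by
# 2**n complex amplitudes, which is why simulating qubits scales exponentially.
import numpy as np

def uniform_superposition(n_qubits):
    """State vector with every basis state equally likely (all amplitudes equal)."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

def measure(state, shots=5):
    """Sample basis states with probability |amplitude|^2, like repeated measurement."""
    probs = np.abs(state) ** 2
    return np.random.choice(len(state), size=shots, p=probs)

for n in (1, 2, 10, 20):
    print(f"{n} qubits -> state vector of length {2 ** n}")

state = uniform_superposition(2)               # a 2-qubit register in equal superposition
print("measurement outcomes:", measure(state))  # each shot collapses to 0, 1, 2, or 3
```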

The Seven Evolving Phases of Artificial Intelligence

Artificial Intelligence (AI) has transformed our world at an astounding pace. It’s like a vast ocean, and we’re just beginning to navigate its depths.

To appreciate its complexity, let’s embark on a journey through the seven distinct stages of AI, from its simplest forms to the mind-boggling prospects of superintelligence and singularity.

Picture playing chess against a computer. Every move it makes, every strategy it deploys, is governed by a predefined set of rules, its algorithm. This is the earliest stage of AI — rule-based systems. They are excellent at tasks with clear-cut rules, like diagnosing mechanical issues or processing tax forms. But their capacity to learn or adapt is nonexistent, and their decisions are only as good as the rules they’ve been given.
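A toy sketch of such a rule-based system (the rules below are hypothetical, purely for illustration) makes the limitation plain: every answer is a lookup against hand-written rules, and nothing is ever learned from new cases.

```python
# Toy rule-based system (hypothetical rules, for illustration only): every
# decision is a direct match against hand-written rules; nothing is learned from data.
RULES = [
    ({"engine_cranks": False, "lights_work": False}, "Likely a dead battery"),
    ({"engine_cranks": False, "lights_work": True},  "Check the starter motor"),
    ({"engine_cranks": True,  "engine_starts": False}, "Check fuel delivery"),
]

def diagnose(symptoms):
    """Return the first rule whose conditions all match the observed symptoms."""
    for conditions, conclusion in RULES:
        if all(symptoms.get(key) == value for key, value in conditions.items()):
            return conclusion
    return "No rule matches; the system cannot generalize beyond its rules"

print(diagnose({"engine_cranks": False, "lights_work": True}))
print(diagnose({"engine_cranks": True, "engine_starts": True, "stalls": True}))
```

The second call falls through to the fallback message: a situation not anticipated by the rule author simply cannot be handled.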

Artificial intelligence can accurately detect hip fractures on pelvic x-rays

Singapore: A research paper, published in iScience, has described the development of a deep learning model for predicting hip fractures on pelvic radiographs (X-rays), even in the presence of metallic implants.

Yet Yen Yan of Changi General Hospital and colleagues at the Duke-NUS Medical School, Singapore, developed the AI (artificial intelligence) algorithm using more than forty thousand pelvic radiographs from a single institution. The model demonstrated high specificity and sensitivity when applied to a test set of emergency department (ED) radiographs.

This study approximates the real-world application of a deep learning fracture detection model by including radiographs with suboptimal image quality, other non-hip fractures and metallic implants, which were excluded from prior published work. The research team also explored the effect of ethnicity on model performance, and the accuracy of a visualization algorithm for fracture localization.
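For readers unfamiliar with the two metrics reported above, the short sketch below (with made-up counts, not the study’s actual figures) shows how sensitivity and specificity are computed from a test set’s confusion-matrix counts.

```python
# Illustrative computation of the two reported metrics (counts are hypothetical):
# sensitivity is the fraction of true fractures detected,
# specificity is the fraction of non-fractures correctly cleared.
def sensitivity_specificity(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# Hypothetical test-set counts: 95 fractures found, 5 missed,
# 20 false alarms, 880 correct negatives.
sens, spec = sensitivity_specificity(tp=95, fp=20, fn=5, tn=880)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
```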

Artificial Intelligence Unlocks New Possibilities in Anti-Aging Medicine

A recent paper in Nature Aging comes from researchers at Integrated Biosciences, a biotechnology company combining synthetic biology and machine learning.

Machine learning is a subset of artificial intelligence (AI) concerned with algorithms and statistical models that enable computers to learn from data and make predictions or decisions without being explicitly programmed to do so. It is used to identify patterns in data, classify data into categories, and make predictions about future events, and it can be divided into three main types of learning: supervised, unsupervised and reinforcement learning.
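As a rough illustration of the first two categories (reinforcement learning needs an interactive environment, so it is omitted), here is a small scikit-learn sketch, assuming scikit-learn is available, that fits a supervised classifier on labeled toy data and an unsupervised clustering model on the same data without labels.

```python
# Illustrative sketch of two of the three learning types on toy data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the model is given labels y and learns to predict them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: the model sees only X and discovers structure (clusters) on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster sizes:",
      [int((km.labels_ == k).sum()) for k in range(3)])
```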

Software creates entirely new views from existing video

Filmmakers may soon be able to stabilize shaky video, change viewpoints and create freeze-frame, zoom and slow-motion effects – without shooting any new footage – thanks to an algorithm developed by researchers at Cornell University and Google Research.

The software, called DynIBar, synthesizes new views using pixel information from the original video, and even works with moving objects and unstable camerawork. The work is a major advance over previous efforts, which yielded only a few seconds of video, and often rendered moving subjects as blurry or glitchy.

The code for this research effort is freely available, though the project is at an early stage and not yet integrated into commercial video editing tools.

History of Generative AI: Paper Explained

Generative AI techniques like ChatGPT, DALL-E and Codex can generate digital content such as images, text, and code. Recent progress in large-scale AI models has improved generative AI’s ability to understand intent and generate more realistic content. This text summarizes the history of generative models and their components, recent advances in AI-generated content for text, images, and across modalities, as well as remaining challenges.

In recent years, Artificial Intelligence Generated Content (AIGC) has gained much attention beyond the computer science community, with society at large taking an interest in the content generation products built by large tech companies. Technically, AIGC refers to using generative AI algorithms to produce content that satisfies human instructions, which guide the model in completing the task. This generation process usually comprises two steps: extracting intent information from the human instructions, and generating content according to the extracted intent.

Generative models have a long history in AI, dating back to the 1950s. Early models like Hidden Markov Models and Gaussian Mixture Models generated simple data. Generative models saw major improvements with the rise of deep learning. In NLP, traditional sentence generation used N-gram language models, but these struggled with long sentences. Recurrent neural networks and Gated Recurrent Units enabled modeling longer dependencies, handling around 200 tokens. In CV, pre-deep-learning image generation used hand-designed features with limited complexity and diversity. Generative Adversarial Networks and Variational Autoencoders later enabled impressive image generation. Advances in generative models followed different paths but converged with transformers, introduced for NLP in 2017. Transformers now dominate many generative models across domains: in NLP, large language models like BERT and GPT use transformers; in CV, Vision Transformers and Swin Transformers combine transformers with visual components for images.
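Since the paragraph above mentions N-gram language models as one of the earliest generative approaches, here is a minimal bigram sketch on a toy corpus (illustrative only, not from the paper): text is generated one word at a time from counted word pairs, which is also why such models struggle to keep longer sentences coherent.

```python
# Minimal bigram language model (toy example): text is generated one word at a
# time from counted word-pair frequencies, so long-range structure is lost.
import random
from collections import defaultdict

corpus = "the model generates text the model learns word pairs the text flows".split()

# Record which words are observed to follow each word.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = bigrams.get(word)
        if not followers:                     # dead end: no observed continuation
            break
        word = random.choice(followers)       # sample the next word from observed pairs
        out.append(word)
    return " ".join(out)

random.seed(0)
print(generate("the"))
```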
