
Become a Deepfake Expert Today!

Plus: Fine-tune your resume and LinkedIn profile, restore blurry photos, & more!

Welcome back, everyone, to AI Now! We apologize for the lack of issues over the last two weeks while our team focused on growing our social media accounts. But don’t worry - we have added more team members and are back to normal this week with a full schedule of newsletters.

We are excited to announce that we'll be launching our first free AI tool this week! To help us make sure it meets your needs, please vote for the type of tool you would like to test out. We can't wait to hear your feedback as we kick off this new initiative. Thank you for your support!

🧠 Become a Deepfake Expert!

Are you curious about the hilarious deepfake news videos that have been popping up on our social media pages recently? We’ve put together a step-by-step video breaking down exactly how we make them, highlighting the most common errors you’ll encounter and how to fix them. Plus, we’ve included some potential use cases for making money with deepfake technology.

But wait – there’s more! To get access to this exclusive video, all you need to do is refer 3 friends or people who might be interested in the content using the link at the bottom of this newsletter. Once you’ve done that, we will send you an email with everything you need to become a deepfake expert. Don’t miss out on this great opportunity!

🔧 Today's Top Tools

  • Fine-tune your resume and LinkedIn profile with help from top recruiters.

  • Click the image below to get tailored feedback instantly and land more interviews, opportunities, and job offers!

  • Ask their AI any questions about finance, investing, budgeting, taxes, and more.

  • It also lets you uncover important info about professionals who provide financial services, and you can browse their advisor database too.

  • Bring your old and blurry photos back to life with restorePhoto's AI-powered image restoration tool.

  • Experience the power of their free software for yourself. Click the image below to get started now!

  • World-class selling skills at your fingertips: an AI communication assistant dedicated to outreach.

  • Use Twain for free to see what your sales pitch is missing.

📊 News

🔑 Use Case

In this video, you'll learn about Machine Learning and how it fits within the broader field of Artificial Intelligence. We’ll take a look at how Machine Learning models work, the various learning methods, and the algorithms employed for different use cases. Get ready to dive deep into understanding Machine Learning!

🧠 Learn

Google makes progress on the self-teaching universal translator:

Google is making strides in the development of a self-teaching universal translator. The Universal Speech Models (USMs) are AI systems designed to recognize speech across more than 100 languages, powered by two billion parameters trained on 12 million hours of multilingual data.

The goal of USM: “Our long-term goal is to train a universal ASR model that covers all the spoken languages in the world,” Google writes. With USMs, Google is exploring “a promising direction where large amounts of unpaired multilingual speech and text data and smaller amounts of transcribed data can contribute to training a single large universal ASR model.”

The key ingredient? The data mix: Much like baking a cake, training predominantly self-supervised models requires the right mix of data. Here, Google uses the following components (summarized in the quick sketch after this list):

  • Unpaired Audio: 12 million hours of YouTube-based audio covering over 300 languages, and 429k hours of unlabeled speech in 51 languages based on public datasets.

  • Unpaired Text: 28 billion sentences spanning over 1,140 languages.

  • Paired audio speech recognition data: 90k hours of labeled multilingual data covering 73 languages, plus 10k hours of labeled multi-domain en-US public data, plus 10k hours of labeled multilingual public data covering 102 languages.
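For quick reference, here’s the same data mix written out as a plain Python dictionary. This is just an illustrative summary of the figures reported above (approximate, and the key names are our own), not anything from Google’s codebase.

```python
# Illustrative summary of the USM training data mix described above.
# Figures are as reported in this newsletter (approximate); key names are ours.
usm_data_mix = {
    "unpaired_audio_hours": {
        "youtube": 12_000_000,        # >300 languages, unlabeled
        "public_datasets": 429_000,   # 51 languages, unlabeled
    },
    "unpaired_text_sentences": 28_000_000_000,  # spanning 1,140+ languages
    "paired_asr_hours": {
        "multilingual_labeled": 90_000,          # 73 languages
        "en_us_multidomain_public": 10_000,
        "multilingual_public": 10_000,           # 102 languages
    },
}

# Example: total unlabeled audio available for self-supervised pre-training.
total_unlabeled = sum(usm_data_mix["unpaired_audio_hours"].values())
print(f"Unlabeled audio: {total_unlabeled:,} hours")  # -> 12,429,000 hours
```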

What they did and what the results were: To train these models, Google employs unsupervised pre-training and multi-objective supervised pre-training across the various datasets. Tests show that USMs reach or even surpass state-of-the-art performance for multilingual ASR, outperforming OpenAI's Whisper models. Google believes unlabeled data is more practical for building usable ASR than weakly labeled data, which points towards a 'gotta grab 'em all' approach when it comes to trawling the web for data. This could have significant implications, as larger sources of data may become accessible through government means. By creating these USM systems, Google is making real progress towards translating speech across multiple languages on a global scale.
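USM itself isn’t something you can download and run today, but since the results above are benchmarked against OpenAI’s Whisper, here is a minimal sketch of multilingual speech recognition using the open-source whisper package as a stand-in. The model size and audio path are placeholders; assume openai-whisper and ffmpeg are installed.

```python
# Minimal multilingual ASR sketch with OpenAI's open-source Whisper
# (the baseline USM is compared against above), not USM itself.
# Requires: pip install openai-whisper  (plus ffmpeg on your system)
import whisper

model = whisper.load_model("small")      # larger checkpoints trade speed for accuracy
result = model.transcribe("speech.mp3")  # placeholder path; language is auto-detected

print("Detected language:", result["language"])
print("Transcript:", result["text"])
```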
