Deconstructing Deepfakes

For our February AI Ethics Twitter Chat, we invited expert guest Dr. Brandie Nonnecke, Founding Director of the CITRIS Policy Lab at UC Berkeley, to discuss “Deconstructing Deepfakes”.


Mia Dand: Dr. Nonnecke, let’s start off with the basics: what are deepfakes?

Dr. Brandie Nonnecke: “Deepfakes” are deceptive audio or visual media created with AI to depict real people saying or doing things they did not. The term “deepfake” is a portmanteau of “deep learning” (a type of machine learning) and “fake”. Don’t confuse deepfakes with “cheap fakes” or “shallow fakes”, which are created without AI. The slowed video of @SpeakerPelosi that made her appear drunk is a good example. While cheap/shallow fakes are less complicated to make, they can be even more detrimental than deepfakes.

MD: Please explain how deepfakes are created.

BN: Deepfakes are created using sophisticated machine learning techniques called generative adversarial networks (GANs). They draw upon a huge amount of training data and “learn” how to generate content.
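To make the adversarial training idea concrete, here is a minimal toy sketch (not from the chat itself) of the two-network setup behind a GAN: a generator learns to imitate a data distribution while a discriminator learns to catch its fakes. Real deepfake systems run this same loop over images, audio, and video at vastly larger scale; the 1-D data, layer sizes, and training schedule below are illustrative assumptions.

```python
# Toy GAN: a generator learns to imitate a 1-D Gaussian while a
# discriminator learns to tell real samples from generated ones.
import numpy as np
from tensorflow.keras import layers, models

latent_dim = 8

# Generator: random noise in, synthetic "data point" out
generator = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(1),
])

# Discriminator: data point in, probability-it-is-real out
discriminator = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(1,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Freeze the discriminator inside the combined model so the combined
# step only updates the generator (the compile order matters here).
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

for step in range(1000):
    real = np.random.normal(loc=4.0, scale=0.5, size=(32, 1))
    fake = generator.predict(np.random.normal(size=(32, latent_dim)), verbose=0)
    # 1) Teach the discriminator to separate real (1) from fake (0)
    discriminator.train_on_batch(
        np.vstack([real, fake]),
        np.vstack([np.ones((32, 1)), np.zeros((32, 1))]))
    # 2) Teach the generator to make the discriminator say "real"
    gan.train_on_batch(np.random.normal(size=(32, latent_dim)), np.ones((32, 1)))
```

Each network improves against the other, and that adversarial pressure is what makes the generated content increasingly realistic.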

MD: What was the original intent behind the creation of deepfakes? Was it malicious or is it an example of good tech gone bad?

BN: Creating realistic synthesized imagery has a decades-long history in computer science. In late 2017, a Reddit user named “deepfakes” posted realistic videos that swapped celebrity faces into pornographic videos. The field exploded, and today most deepfakes are pornographic.

MD: In your op-ed in Wired, you say that California’s “Anti-Deepfake Law” is too feeble. Can you please break down the key issues you see with it?

BN: Absolutely! Deepfakes pose a serious threat to election integrity. Lawmakers have pushed forward legislation to mitigate the harmful effects of this technology in the #2020election: both California and Texas have passed laws, and federal legislation has been proposed. California’s “Anti-Deepfake Law” makes it illegal to spread a malicious deepfake within 60 days of an election. However, four significant flaws will likely make it ineffective: timing, misplaced responsibility, burden of proof, and inadequate remedies.

Timing: The law only applies to malicious deepfakes spread within 60 days of an election and exempts satire and parody. However, it’s unclear how to effectively and efficiently determine whether content meets these criteria, and lengthy reviews will enable content to spread like wildfire.

Misplaced Responsibility: The law exempts platforms from having to monitor and stem the spread of deepfakes. This is due to Section 230 of the Communications Decency Act, which shields platforms from liability for harmful user-generated content.

Burden of Proof: The law only applies to deepfakes posted with “actual malice.” Convincing evidence, which is often difficult to obtain, will be necessary to determine intent, and for flagged content a lengthy review process will likely ensure the content continues to spread.

Inadequate Remedies: Malicious deepfakes will be able to spread widely before detection and removal. There is no mechanism to ensure that those who were exposed receive a notification about the intent and accuracy of the content, so harm from exposure will go unchecked.


MD: What would you like to see in future laws? And what are some other remedies outside of law?

BN: I’d like to see laws put more of the onus on platforms to police content, and provide greater clarity on how to efficiently and effectively determine intent. Outside of law, there are some promising technical and governance strategies emerging. Platforms are taking a more proactive role in supporting the development of technical strategies to identify and stop the spread of harmful deepfakes. The Deepfake Detection Challenge led by @aws @facebook @Microsoft @PartnershipAI is one example: https://deepfakedetectionchallenge.ai/.

Greater accountability is needed from platforms for their role in building systems that spread malicious deepfakes. We’ve seen some promising moves from @Facebook & @Twitter in the past weeks to ban malicious deepfakes & label suspicious content, which makes me hopeful.

MD: Lastly, how can folks avoid being duped by deepfakes, given that their usage will likely spike during upcoming elections?

BN: Most deepfakes are not sophisticated enough to trick the viewer. Glitchy movements and an off cadence are tell-tale signs that something’s amiss. Carefully review the content for inconsistencies or flaws, and search for whether a trusted entity has evaluated the veracity of the content. Everyone has a critical role to play in mitigating the spread of malicious deepfakes.

Take caution in spreading content that you can’t verify. Altogether now: when in doubt, don’t share it out! IMO, the *threat* of a deepfake will cause more harm in the #2020election than a deepfake itself. Nefarious actors will cry “deepfake!” at content they don’t like to sow confusion. This is what Danielle Citron and Bobby Chesney call the Liar’s Dividend: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954 When there’s no definitive truth, everything’s a lie.

MD: (Question from Blakeley Payne) Papers on the subject commonly write about preventing possible abuse with better detection systems. Does that matter in practice, given the large gaps between software built by academics and proprietary software written for, say, a film?

BN: Great question! Academics are making great strides in developing ML tools to detect deepfakes. But there is a risk of a deepfake arms race, in which detection technology fails to outpace the development of ever more convincing deepfakes.



Deconstructing Deepfakes was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.


TensorFlow Dev Summit 2020: Top 10 Tricks for TensorFlow and Google Colab Users

In this piece, we’ll highlight some of the tips and tricks mentioned during this year’s TF Summit. Specifically, these tips will help you get the best out of Google’s Colab.

Originally from KDnuggets https://ift.tt/39Q29WV


New Course! Web Scraping and API Fundamentals in Python


Hi! I’m Nikola Pulev, a Physics graduate from the University of Cambridge turned Data Science practitioner and a course instructor at 365 Data Science… And I have exciting news coming your way – the release of our new course – Web Scraping and API Fundamentals in Python!

In this post, I’ll share with you what makes this course special and what cutting-edge skills it will help you develop. Finally, I’ll talk a bit more about myself and the co-author of the course, Andrew Treadway.

The 365 Data Science Web Scraping and API Fundamentals in Python Course

Andrew and I designed this course to teach you how to extract information from the web like a real pro. In today’s data-driven society, obtaining data quickly and efficiently is a key skill. So, in this course, we show you the different approaches to doing that: extracting information from a webpage, or using public APIs to get structured data.

Who is this course for?

Data collection is an essential part of data science. So, if you are an aspiring data professional, it will be truly beneficial to understand how it’s done. On the other hand, we did our best to make a very practical course. So, even if you don’t plan to pursue a career in data science, you’ll gain enough knowledge to automate some daily routines… Or access interesting data about your hobby you didn’t have a clue about. The course is also a great fit for students looking for a comprehensive, engaging and highly interactive approach.

How is the course structured?

In the first part of the course, you’ll learn how to collect data through APIs (the most popular way to transmit data via the internet).

We’ll show you how to GET data from APIs and how to POST your own data contributions to APIs.
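As a small taste of what that looks like, here is a minimal sketch using Python’s requests library. This is not taken from the course materials; httpbin.org is a public echo service used purely for illustration.

```python
import requests

# GET structured data from an API endpoint (httpbin echoes the request back)
resp = requests.get("https://httpbin.org/get", params={"q": "data science"})
resp.raise_for_status()
print(resp.json()["args"])   # {'q': 'data science'}

# POST your own data contribution as JSON
resp = requests.post("https://httpbin.org/post", json={"rating": 5})
resp.raise_for_status()
print(resp.json()["json"])   # {'rating': 5}
```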

In the second part of the course, we focus on Web Scraping – the best alternative in the absence of APIs.

Here, you’ll learn how to leverage powerful libraries such as Beautiful Soup and requests-html to scrape any website out there.

Well, at least the ones we are legally and morally allowed to.

We’ll also discuss common problems in Web Scraping and provide you with solutions or workarounds. In addition, we’ll talk more about scraping content dynamically generated by JavaScript.
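And as a taste of the scraping side, here is a minimal Beautiful Soup sketch. Again, this is not from the course itself; example.com is a stable demo page chosen for illustration.

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page and parse its HTML
html = requests.get("https://example.com").text
soup = BeautifulSoup(html, "html.parser")

# Pull out the page title and every link via CSS selectors
print(soup.title.string)
for a in soup.select("a"):
    print(a.get("href"), a.get_text(strip=True))
```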

Andrew and I wanted to make the Web Scraping and API Fundamentals in Python course as engaging as possible. To that end, we hope you’ll enjoy the high-quality animations, additional course materials, quiz questions, handouts, course notes, and notebook files with commented code.

What skills will you acquire?

Once you complete the course, you’ll be able to:

  • Understand the fundamentals of Web Scraping
  • Implement APIs into your applications
  • Master working with Beautiful Soup
  • Start using requests-html
  • Create functioning scrapers
  • Scrape JavaScript
  • Familiarize yourself with HTML
  • Get the hang of CSS Selectors
  • Make HTTP requests
  • Understand website cookies
  • Explore scraping content locked behind a log-in system
  • Limit the rate of requests

Sounds exciting, doesn’t it? I could share many more details, but I’ll let you discover them on your own in the Web Scraping and API Fundamentals in Python Course Page.

About the authors

As I mentioned, I am a Natural Sciences graduate from the University of Cambridge, in the UK, with a passion for Mathematics, Physics, and Programming. In fact, I enjoy the subject matter so much I’ve taken part in multiple national and international competitions, where I’ve won numerous awards. I also hold a silver medal from the International Physics Olympiad. Despite my background, I am not doing much Physics nowadays. I found a new calling in data science and creating courses to help anyone advance in the field, regardless of their background. In fact, I am a firm believer that you don’t need high school or university to become proficient in a particular area. All you need is passion and a little guidance.

But I didn’t work on this project alone.

It’s a joint effort with Andrew Treadway. Andrew is an outstanding Senior Data Scientist with a Master’s degree in Computer Science with Machine Learning from the Georgia Institute of Technology. He’s been involved in data-related Python programming for more than 7 years. What’s more, he is also the creator of ‘yahoo fin’ – a popular Python library, designed to extract data from the Yahoo Finance website. To be fair, I was using his library long before I met him. Andrew’s help and input were invaluable to the creation of this course; and it was a complete pleasure to work together on this project!

 

The Web Scraping and API Fundamentals in Python Course is part of the 365 Data Science Program, so current subscribers can access the courses at no extra cost.

To learn more about the 365 Data Science Program curriculum or enroll in the 365 Data Science Program, please visit our Courses page.

Want to explore the curriculum or sign up for 12 hours of beginner-to-advanced video content for free? Click on the button below.

The post New Course! Web Scraping and API Fundamentals in Python appeared first on 365 Data Science.


3 Reasons to Use Random Forest Over a Neural Network: Comparing Machine Learning versus Deep Learning

The random forest algorithm and neural networks learn in different ways but can be used in similar domains. Why would you use one over the other?

Originally from KDnuggets https://ift.tt/3aV3jSl


KDnuggets News 20:n14 Apr 8: Free Mathematics for Machine Learning eBook; Epidemiology Courses for Data Scientists

Stop Hurting Your Pandas!; Python for data analysis… is it really that simple?!?; Introducing MIDAS: A New Baseline for Anomaly Detection in Graphs; Build an app to generate photorealistic faces using TensorFlow and Streamlit; 5 Ways Data Scientists Can Help Respond to COVID-19 and 5 Actions to Avoid

Originally from KDnuggets https://ift.tt/2Ro0LEJ


T-Cell Receptor Sequencing using Neural Networks

Helping to eradicate diseases and viruses through the power of T-cells.

The Greatest Threat to Humanity…

What is the greatest threat to humanity? Some might say a nuclear fallout. Others might say a large asteroid hitting our planet. Most people believe the biggest threat to humanity is some unimaginable, out-of-this-world event that will wipe out humans in one sweep. Well… the greatest threat to humanity is right under our noses.

Nuclear Fallout is Unlikely

The greatest threat to humanity is a pandemic we aren’t prepared for, ravaging its way around the world. Don’t believe me? Just look at history. The Spanish Flu infected a quarter of the world’s population and killed upwards of 100 million people. Smallpox killed around 500 million people. Disease is the leading cause of death. Even billionaire philanthropist Bill Gates believes that the next worst thing to happen to humanity will be a virus.

“If anything kills over 10 million people in the next few decades, it’s most likely to be a highly infectious virus rather than a war — not missiles, but microbes”

Most of these viruses attack our immune system and terrorize our bodies from the inside. Vaccines have been our main method of fighting back: they work by injecting proteins from the virus into a person’s body, exposing the immune system to the threat and teaching it how to fight.

But creating a vaccine is costly and takes a lot of time. Instead of having to expose our immune system to the virus in a controlled manner, what if we could prime our immune system to fight off these diseases without a vaccine?

Welcome to T-Cells

T-cells are the immune system’s natural defense against disease. They are cells within the immune system that identify dangerous cells (like cancerous or virus-infected ones) and deactivate them via apoptosis. Here’s how they work:

Every diseased cell carries antigens on its surface, produced as the cell hijacks and damages other cells. T-cells have receptors (TCRs) on their surface that can bind to certain antigens, allowing the T-cell to kill off the diseased cell. Your body does this all the time; however, foreign diseases sometimes adapt so that your body’s natural TCRs can no longer bind to the diseased cells’ antigens. This becomes problematic, as the diseased cells can then spread throughout your body and essentially kill you. CAR T-cell therapies solve this issue by engineering specific TCRs to fight these antigens.


CAR T-Cell Therapies

In a CAR T-cell therapy, a patient’s T-cells are extracted from a blood sample and genetic information is injected into the nucleus of each T-cell, instructing it to produce certain T-cell receptors. These new CAR T-cells are then put back into the body, where they bind with the antigens and kill the diseased cells.

When considering the TCR of the new T-Cell, there are a few major parts of the receptor to engineer to make sure that the antigen can bind with the TCR. The major parts include:

The Antigen: Also referred to as the peptide, this protein sequence is what connects the TCR to the MHC (the connector to the cell) to allow apoptosis to occur.

V-Segment: The V-segment, or variable segment, is the outermost part of the T-cell receptor. It comes in many different variants.

J-Segment: The J-segment, or joining segment, connects the V-segment to the rest of the TCR. It also comes in many different variants.

CDR3: The third complementarity-determining region of the T-cell receptor is also important when fighting diseases. It takes the form of a protein sequence.

Epitope: The epitope is the part that joins the antigen to the antibody or the rest of the cell. It is represented as a sequence of proteins.

Finding the parts of the T-cell receptor is called T-cell receptor (TCR) sequencing. Currently, finding the various parts of the TCR requires lots of research and testing. It can be costly and time-consuming, which is far from ideal when dealing with a new disease, and it is where CAR T-cell therapies become inefficient. TCR sequencing is a hard process that involves many variables. Here, I introduce a deep learning approach to TCR sequencing, using a variety of deep learning methods.

T-Cell Receptor Sequencing with Machine Learning

Given the epitope protein sequence of an antigen, can we predict the V-segment, J-segment, and CDR3 protein sequence of the TCR that will bind with the antigen? This is the fundamental question I tried to answer using machine learning. To make it manageable, I broke the problem into three goals: one model to predict the V-segment, one to predict the J-segment, and one to predict the CDR3 sequence.

I used the VDJdb database, which contains information on epitopes and the corresponding T-cell parts that would trigger apoptosis. It contains over 75,000 data points, which is what the models were trained on. You can find the database here.

Deep Learning Model for V-Segment and J-Segment

The methods for predicting the V-segment and the J-segment are very similar: there are only a handful of classes each segment can belong to, so a model can be trained to predict the class. This is a multiclass classification problem.

Input Data Representation

As mentioned, the input to the neural networks is the epitope protein sequence of the antigen. However, the epitope protein sequence is represented as letters, with no fixed length. This is problematic, as neural networks work exclusively with numerical values in fixed-size inputs.

Examples of epitope protein sequences: LLWNGPMAV (Yellow Fever Virus), CPSQEPMSIYVY (Cytomegalovirus), CTPYDINQM (Simian Immunodeficiency Virus)

Luckily, assigning each letter an id makes this easy. We can map each letter in the protein sequence to a number: the letter “A” becomes 1, the letter “B” becomes 2, and so on.

So the yellow fever virus sequence becomes:

LLWNGPMAV → 12 12 23 14 7 16 13 1 22

This encoded sequence of numbers has a length of 9. However, some sequences will have a length of 8, 10, 11, or up to 20 proteins. A neural network needs a fixed-size input, so we pad every sequence with 0’s up to the maximum possible length of 20. Our encoded yellow fever virus sequence becomes:

12 12 23 14 7 16 13 1 22 → 12 12 23 14 7 16 13 1 22 0 0 0 0 0 0 0 0 0 0 0
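A minimal sketch of this encode-and-pad step (the helper name is my own; the mapping and padding follow the description above):

```python
MAX_LEN = 20  # longest epitope sequence in the dataset

def encode(seq, max_len=MAX_LEN):
    """Map letters to 1..26 and zero-pad the sequence to max_len."""
    ids = [ord(ch) - ord("A") + 1 for ch in seq.upper()]
    return ids + [0] * (max_len - len(ids))

print(encode("LLWNGPMAV"))
# [12, 12, 23, 14, 7, 16, 13, 1, 22, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```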

These inputs are fed into our neural network in the form of arrays. So with the input handled, how does the neural network decide which segment is appropriate for the sequence?

Model Architecture

A neural network is just a mathematical function that takes an input “x” and produces an output “y”. It has weights, or learned parameters, that transform the x into the y, and different neural network architectures use these parameters in different ways. The parameters are optimized to produce the best output possible by computing a loss function, which measures how far off the model is: the lower the loss, the better the model. An optimizer (built on calculus) adjusts the parameters to drive the loss down. This process is what “training the model” in machine learning means.

The model has over 30 million parameters that can be optimized. It uses standard dense layers, which perform the most basic neural network operation on an input, yet it is very effective.

Output Data

The last dense layer in the neural network has 126 neurons, representing the 126 classes the V-segment of the T-cell can be. The output takes the form of a one-hot vector: every neuron outputs 0 except one, which outputs 1. The position of the neuron with value 1 determines the V-segment class.

For the J-segment model, the last layer has 68 neurons, representing the 68 classes the J-segment can be. It’s the same model with a different number of classes in the final dense layer.

So, for example, if the output of the V-segment model is 1 0 0 0…, the V-segment is the 1st class: the TRBV6–8 variable segment.
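For illustration, here is a minimal Keras sketch of such a classifier. The hidden-layer sizes are my own assumptions, and this toy version is far smaller than the 30-million-parameter model described above; only the input length (20) and the output sizes (126 or 68 classes) come from the post.

```python
from tensorflow.keras import layers, models

# Dense multiclass classifier: 20 encoded epitope positions in,
# 126 V-segment classes out (swap in 68 for the J-segment model).
model = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(126, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(encoded_epitopes, v_segment_class_ids, epochs=30, ...)
```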


Training / Results

After training both models, I got similar results. The training loss went down to 3 on the full dataset, but the validation loss was much worse: the models had a tough time generalizing to new data, even though the approach works in theory. In practice, many classes will work for a given epitope, so the effective loss is much lower.

[Figure: training loss curves for the V-segment and J-segment models]

After training for 30 epochs each, the models work decently well on new, unseen data: I can predict one of the right classes up to 80% of the time.

Sequence 2 Sequence for CDR3 Protein Sequence

Since the CDR3 part of the T-cell is a protein sequence of no fixed length, a multiclass approach wouldn’t work. Instead, we need an approach that takes in the epitope and produces an output of dynamic length. A Sequence 2 Sequence (seq2seq) model can do exactly this: it takes in one sequence (the epitope protein sequence) and produces another (the CDR3 protein sequence), using an encoder-decoder architecture.

Input Data Representation

The input representation for this model is similar to that of the V-segment and J-segment models: the proteins in a sequence are mapped to numbers (“A” to 1, “B” to 2, and so on). Where it differs is that, after encoding the proteins to numbers, those numbers are encoded into one-hot vectors. A one-hot vector is an array of 0’s with a single 1, and the position of the 1 represents the class. For example, if the number 1 is encoded as a one-hot vector, the first position of the array is 1 and the rest are 0. Since there are 26 letters, there are 26 classes in the one-hot vector. This representation makes the information easier for the neural network to handle.
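A quick sketch of that one-hot step (the helper name is mine; 26 classes for the letters, with id 1 mapping to the first position as described):

```python
import numpy as np

def one_hot(ids, num_classes=26):
    """Turn integer ids into one-hot rows: id 1 -> a 1 in the first position."""
    vecs = np.zeros((len(ids), num_classes))
    for row, i in enumerate(ids):
        vecs[row, i - 1] = 1.0
    return vecs

print(one_hot([12, 1]))  # two rows, each with a single 1 (positions 12 and 1)
```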

Each protein sequence also gets a start token at the beginning and an end token at the end; these indicate exactly what they sound like: where a sequence starts and where it ends. In addition, during training the desired output sequence is fed into the model as an input. Using the output as an input may sound odd, but this technique, known as teacher forcing, makes the seq2seq model more accurate.

Model Architecture

Embeddings: First, the input is passed through an embedding layer, which converts it into a new vector of values of a fixed length. This is important because we padded our input to a constant length, and the embedding lets the model learn to treat the padded positions as unimportant. The layer multiplies the input by weights that are learned during training. Embeddings also place similar words or sequences close to each other in the embedding space, which helps the model generalize.

[Figure: embeddings map, with similar sequences placed near each other]

Encoder: The encoder is an LSTM layer that produces the state vectors for the decoder. Essentially, the encoder processes the input and extracts its most important features. An LSTM maintains state vectors that track both long-term and short-term information, and it produces an output. Propagating our input through it extracts the crucial features, which the decoder later reconstructs into a sequence.

Decoder: The decoder is also an LSTM, but it reconstructs the output from the encoder’s features: it produces a representation of the new sequence from the encoder states. The decoder outputs are then run through a dense layer to produce the final sequence, encoded as numbers. These specialized components are what make the sequence 2 sequence model so effective, and with enough data its parameters can be learned readily.
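Putting the pieces together, here is a minimal Keras sketch of the encoder-decoder wiring described above. The vocabulary, embedding, and LSTM sizes are illustrative assumptions, not the post’s actual hyperparameters.

```python
from tensorflow.keras import layers, models

VOCAB = 26 + 2   # letters plus start/end tokens
MAX_LEN = 20
EMB, UNITS = 32, 64

# Encoder: embed the epitope and keep the LSTM's final state vectors
enc_in = layers.Input(shape=(MAX_LEN,))
enc_emb = layers.Embedding(VOCAB, EMB, mask_zero=True)(enc_in)
_, state_h, state_c = layers.LSTM(UNITS, return_state=True)(enc_emb)

# Decoder: consume the target CDR3 sequence (teacher forcing),
# initialized with the encoder's states
dec_in = layers.Input(shape=(None,))
dec_emb = layers.Embedding(VOCAB, EMB, mask_zero=True)(dec_in)
dec_out = layers.LSTM(UNITS, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
probs = layers.Dense(VOCAB, activation="softmax")(dec_out)  # per-step letter

model = models.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Train with the CDR3 shifted by one step:
# model.fit([epitopes, cdr3[:, :-1]], cdr3[:, 1:], ...)
```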

Training / Results

This model trained very well and achieved strong results. It has only around 20,000 parameters (small for a neural network), yet it was able to generate protein sequences with 90% accuracy and got down to a 0.4 loss.

There were no signs of overfitting over the 20 epochs it was run for, which means there’s still room for improvement! Overall, this works really well.

Demo

[Demo: predicted TCR components for melanoma, herpes virus, and yellow fever virus antigens]

Implications

I envision a future where pandemics, diseases, and viruses can be dealt with in a secure, speedy manner. We are inevitably moving toward a more personalized medical system that favours those with money over the general public. As a society, we need to address large-scale disease and make treatment accessible to everyone, and I believe CAR T-cell therapy is something we need to keep working to advance. Breakthroughs in research can expedite getting it widely to market, but we will need the full cooperation of world governments to achieve accessibility for everyone. The solution I’ve proposed helps make CAR T-cell therapy better, not more accessible; as accuracy goes up, we need to ensure usability goes up as well.

This project also helps cement the use of NLP technologies in the medical field, which is becoming more reliable. A lot of people don’t trust AI to help with medical discovery, but the fact is that it makes our current medical technology better.

Code for project: https://github.com/Vansh-Sethi/Predicting-Properties-of-T-Cells-to-Increase-Immune-Response

Key Takeaways:

  • Problem: Predict the CDR3 protein sequence, variable segment & joining segment of a T-cell given the epitope of an antigen
  • Epitope data can be represented as numbers through encodings and embeddings
  • Variable segment & joining segment can be predicted using classical neural networks
  • CDR3 protein sequence can be predicted using a Sequence 2 Sequence model

Thank you for reading! It means a lot that people engage with my content and I can bring some of my own insight. Feel free to contact me via my website. Thank you again!



T-Cell Receptor Sequencing using Neural Networks was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.


How is Artificial Intelligence used in Sales?


Businesses are using AI as an essential marketing strategy to boost sales. Here are 5 intriguing ways in which Artificial Intelligence is used in sales.

Companies are leveraging the incredible benefits of Artificial Intelligence to advance their businesses. With its strong capabilities, Artificial Intelligence has now plunged into the sales industry too, where it has become an essential marketing strategy for boosting sales.

For those enterprises already in the AI fray, top-performing companies are more than twice as likely as their peers to be using the technology for marketing (28% vs. 12%), says Adobe.


No wonder the use of Artificial Intelligence in sales brings a lot of advantages to the table. Advanced AI techniques help businesses operate and communicate more intelligently, and relevant insights extracted from data analysis help marketing strategists and salespeople make the right decisions. This is why,

87% of current AI adopters said they were using or considering using AI for sales forecasting and for improving e-mail marketing, says Statista.

AI techniques help businesses predict which leads are most likely to buy from them, keep a check on the performance of the sales team, and optimize the prices of various deals. In this blog, I discuss several ways of using AI to improve sales. Let’s look at them in detail.


Using AI to improve Sales!

Artificial Intelligence techniques help businesses to analyze datasets and draw meaningful insights from data. These insights are then used to make predictions, product recommendations, and important business moves.

Thanks to this incredible potential to augment and improve sales performance, AI can create $1.4 to $2.6 trillion of value in marketing and sales, says a report from Harvard Business Review.

Companies across the globe are leveraging advanced AI techniques to scale business growth and increase sales profits. Listed here are 5 intriguing ways in which Artificial Intelligence is used in sales.

Predictive Forecasting!

Wondering how Artificial Intelligence is used in sales? One answer is predictive forecasting: AI systems can predict or forecast future outcomes based on the analysis of historical data.

Some of the common predictions that sales AI systems can make include:

  • Leads or prospects that are most likely to close.
  • Leads or prospects to be targeted next.
  • New leads that may be interested in what you’re selling.

The accuracy of these predictions depends entirely on the system being used. With the right inputs, AI in sales is capable of showing the right future prospects for your business.


Predictive Forecasting doesn’t just surface new prospects; it also creates great value for the sales team itself.

Through thorough data analysis, AI can help sales managers forecast their team’s performance well in advance, which allows them to take proactive measures based on the numbers. That’s the reason why,

37% of organizations have implemented AI in some form, a 270% increase over the last four years, says a press release by Gartner. You can hire an AI developer to incorporate AI techniques for scaling your business growth. Read on for more ways Artificial Intelligence is used in sales.

Price Optimization!

Price optimization is a challenging task, but Artificial Intelligence can make it easier: advanced AI techniques can determine the ideal discount to offer to win a deal.

By analyzing data on past deals, an AI algorithm can find the ideal discount rate for a proposal: the rate that lets you win the deal. The algorithm learns this from the features of each historical deal.

These features may include the size of the deal, the number of competitors, product specification compliance, company size, the client’s industry, the client’s annual revenues, whether the company is public or private, and the level of the decision-makers involved.
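As a hypothetical sketch of how such a model might work, one could fit a classifier on past deals and scan candidate discounts for a new deal. Every column name and number below is invented for illustration; a production system would use far more data and features.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy history of past deals: features plus whether the deal was won
deals = pd.DataFrame({
    "deal_size":    [10_000, 50_000, 25_000, 80_000, 15_000, 60_000],
    "competitors":  [1, 3, 2, 4, 1, 3],
    "discount_pct": [5, 15, 10, 20, 0, 12],
    "won":          [1, 1, 1, 0, 0, 1],
})
features = ["deal_size", "competitors", "discount_pct"]
model = LogisticRegression().fit(deals[features], deals["won"])

# Scan candidate discounts for a new deal and inspect the win probabilities
candidates = pd.DataFrame({"deal_size": 30_000, "competitors": 2,
                           "discount_pct": np.arange(0, 25, 5)})
for disc, p in zip(candidates["discount_pct"],
                   model.predict_proba(candidates[features])[:, 1]):
    print(f"discount {disc:>2}% -> predicted win probability {p:.2f}")
```

The smallest discount whose predicted win probability clears a chosen threshold would then be the “ideal” rate.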

Thus, with the help of advanced algorithms, using AI to improve sales has become a preferred choice among businesses. This is why many sales and marketing companies are investing heavily in technologies such as Artificial Intelligence.

Worldwide spending on Artificial Intelligence systems is forecast to reach $35.8 billion in 2019, an increase of 44% over the amount spent in 2018, says IDC.


Lead Scoring!

Curious to know more about how Artificial Intelligence is used in sales? Consider lead scoring. A salesperson has to make frequent strategic decisions about where to invest their time when closing deals to hit their monthly or quarterly quota.

Most of the time, that decision-making is based on gut instinct and incomplete information. With Artificial Intelligence, an algorithm can analyze a client based on past information such as customer interaction history via emails, voicemails, text messages, etc.

Thus, with the help of advanced algorithms, AI can help find and prioritize the most promising leads. This is how using AI to improve sales has become a preferred option among sales and marketing companies. You can read more about the advantages of AI here in this blog.

Upselling and Cross-Selling!

Companies are also using AI to improve the sales profits of their business. The most economical way to grow your revenue is to sell more to your existing client base; the question is who is going to buy. AI can help you find the prospects most likely to be interested in buying your product.

Doing this saves a lot of time and money from being wasted on marketing to those who won’t buy, all thanks to Artificial Intelligence algorithms. An AI algorithm can find which of your existing clients are interested in buying a better version of what they currently own (up-sell) and which are most likely to want a different product (cross-sell).

All this increases overall revenue while lowering marketing costs. Hopefully you now have a clear picture of how Artificial Intelligence works for sales.

Performance Management!

Sales managers have to assess pipelines to find potential deals. Using advanced Artificial Intelligence algorithms, sales managers can consult dashboards to see which salespeople are likely to hit their quotas. This way, managers can keep an eye on the performance of their sales team.

84% of enterprises believe investing in AI will lead to greater competitive advantages, says a report.


The Bottom Line

By now, I am sure you have a clear idea of how Artificial Intelligence is used in sales. AI algorithms help improve sales performance through predictive forecasting, price optimization, lead scoring, up-selling and cross-selling, and performance management for the sales team. Companies are catching on to this fact, which is why they are using AI capabilities to grow their top-line revenue.

According to Salesforce’s latest State of Sales report, sales leaders expect their AI adoption to grow faster than any other technology. Salesforce also found that high-performing teams are 4.9X more likely to be using AI than underperforming ones.



How is Artificial Intelligence used in Sales? was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.


Artificial Intelligence Will Change the Way Leaders Think About Management


The robot revolution is here, and it’s taken over the manager’s desk — or, at the very least, begun to share the chair.

In recent years, artificial intelligence (AI) tools have begun to edge their way into the day-to-day conventions of work and leadership. In 2019, the Oracle and Future Workplace AI@Work Global Study reported that nearly 50 percent of respondents said that they currently used some form of AI at work — a notable leap from the year before, when only 32 percent said the same. However, more striking are the attitudes that the data revealed; according to the study’s researchers, HR leaders were the most optimistic about AI’s entry into the workplace (36 percent), followed by managers (31 percent).

This revelation might seem counterintuitive at first glance. After all, nuanced interpersonal work doesn’t seem to fall easily into the category of repetitive, data-driven roles that most people associate with AI expansion and automation. Fields such as retail, transportation, advertising, and logistics all seem more at risk for technology-caused job losses. In fact, CNBC recently reported that 42 percent of workers in the business and logistics support sector have “above-average concerns about new technology eliminating their jobs.”


And yet, AI is just as present in the management profession — if, notably, not as feared. The above Oracle study noted that most employees seemed to view AI and human managers as complementary, rather than competing, presences in the workplace.

Respondents shared that they felt robots were better than human supervisors at “hard” organizational skills such as maintaining work schedules (34 percent), problem-solving (29 percent), and managing a budget. However, they also noted that humans surpassed AI when it came to “soft” empathetic skills like understanding employee feelings (45 percent), supporting a positive workplace culture (29 percent), and evaluating performance (26 percent).

These findings suggest that AI is in a position to support human managers, not replace them. Currently, more than 50 percent of a manager’s time is spent on administrative coordination and control — the very tasks that, as the Oracle study indicates, employees believe are better performed by AI. Conventional managers spend just 30 percent of their time on problem-solving and collaborating, 10 percent on strategy and innovation, and a mere 7 percent on developing their people and engaging with stakeholders.

Given this data, it stands to reason that if managers delegate the brunt of their administrative responsibilities to AI tools, they will have more time to spend on the tasks that they are best-suited to perform. Managers already want AI support; according to data published in a 2016 issue of the Harvard Business Review, 86 percent of surveyed managers mentioned that they would like AI support with their monitoring and reporting responsibilities.


Manager-to-AI delegation stands to benefit businesses, as well. To quote American Express’s vice president of customer data science and platforms, Anthony Mavromatis, in an article on the matter for WeForum: “By cutting [managers] loose from tasks traditionally expected of them, AI allows managers to focus on forging stronger relationships with their teammates and having a greater impact in their roles.”

Layne Thompson, the director of ERP Services for a U.S. Navy IT organization, made a similar point in an article for the Harvard Business Review. He noted, “More often than not, managers think of what they’re doing as requiring judgment, discretion, experience, and the capacity to improvise, as opposed to simply applying rules. And if one of the potential promises of machine learning is the ability to help make decisions, then we should think of technology as being intended to support rather than replace [managers].”

AI has tremendous promise to help managers — and a few potential pitfalls.

Earlier this month, The Verge ran an article about the dark side of AI-supported management, profiling what can happen when human managers become subordinate to AI organizers rather than partners with them. The piece shared a few anecdotes of management gone wrong, particularly highlighting the case of an Amazon warehouse worker who was driven to injury under AI-directed management processes.

“Management was completely automated,” the worker, speaking under the pseudonym of Jake, told reporters. He explained that managers would patrol the warehouse floor with their laptops open, pushing workers to speed up whenever their AI-powered tracking software indicated a slowdown.

The repetitive, fast-paced work took a toll on Jake’s back, eventually creating a disc injury. One manager told him that bending his knees more when lifting would be easier on his back — but when he did so, another supervisor came by to notify him that his rate had dropped and he needed to speed up. Eventually, the problem became so pronounced that Jake was unable to continue working.

Jake’s story illustrates what can happen when humans are used to reinforce AI-produced hard metrics, rather than work in collaboration with AI to support employees. In Amazon’s case, prioritizing the achievement of what AI viewed as optimal performance led to managers giving up an essential aspect of their usual roles — noticing when employees are struggling and intervening when necessary. As a result, Jake was unable to work, and the company had to pay the turnover costs of replacing a good employee.

AI holds tremendous potential for managers, but it must be applied correctly. AI’s role should always be to support managerial performance — not to set performance metrics and use human managers as enforcers.

When businesses incorporate AI management technology into their workflow, they need to have a clear vision for how it will — and will not — be used. Leaders should emphasize that the technology will only be used to automate time-consuming administrative work and give managers more time to fulfill their interpersonal responsibilities, such as providing team-wide support, developing personnel, and pursuing innovation. Then, they should periodically check in to make sure that those goals are being met and that AI hasn’t — as in the case of Amazon’s Jake — turned toxic to the very people it was intended to help.

There is no doubt that AI can benefit managers, employees, and businesses as a whole in the future. However, business leaders will need to be strategic, and, above all else, demonstrate human empathy as they incorporate the technology into their managerial ranks.



Artificial Intelligence Will Change the Way Leaders Think About Management was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.


2 Things You Need to Know about Reinforcement Learning: Computational Efficiency and Sample Efficiency

Experimenting with different strategies for a reinforcement learning model is crucial to discovering the best approach for your application. However, where you land can have a significant impact on your system’s energy consumption, which could cause you to think again about the efficiency of your computations.

Originally from KDnuggets https://ift.tt/3aQepbb


Simple Question Answering (QA) Systems That Use Text Similarity Detection in Python

But how exactly are smart algorithms able to engage and communicate with us like humans? The answer lies in Question Answering systems that are built on a foundation of Machine Learning and Natural Language Processing. Let’s build one here.

Originally from KDnuggets https://ift.tt/2V4wdsA

