Vision Transformers: Natural Language Processing (NLP) Increases Efficiency and Model Generality

Transformers Are for Natural Language Processing (NLP), Right?

There has been no shortage of developments vying for a share of your attention over the last year or so. If you regularly follow the state of machine learning research, though, you may recall one particularly loud contender: OpenAI’s GPT-3 and the accompanying business strategy the group has built around it. GPT-3 is the latest, and by far the largest, model in OpenAI’s lineage of general-purpose transformers for natural language processing.

Of course, GPT-3 and its GPT siblings may grab the headlines, but they belong to a much larger superfamily of transformer models, including a plethora of variants based on the Bidirectional Encoder Representations from Transformers (BERT) family originally created by Google, as well as smaller families of models from Facebook and Microsoft. For an expansive, though still not exhaustive, overview of the major NLP transformers, the leading resource is probably the Apache 2.0 licensed Hugging Face Transformers library.

Attention Is All You Need

All of the big NLP transformers share a line of intellectual inheritance from the seminal 2017 paper “Attention Is All You Need” by Vaswani and colleagues. The paper laid the groundwork for doing away with architectures sporting recurrent connections in favor of long-sequence inputs coupled with attention. In the years since, transformer models, which are characteristically built out of attention modules and have no recurrent connections, have mostly superseded LSTM-based models like ULMFiT, although we can probably credit ULMFiT with highlighting the importance of transfer learning in NLP. Given the rapid growth in transformer size as models have scaled to millions and billions of parameters, with concomitant energy and monetary costs for training, transfer learning is all but required to do anything useful with large NLP transformers.
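To make that transfer-learning workflow concrete, here is a minimal sketch, assuming the Hugging Face Transformers library mentioned above together with PyTorch, that loads a pretrained BERT checkpoint and trains only a small classification head on top of the frozen encoder. The checkpoint name, toy sentences, and hyperparameters are illustrative assumptions, not recommendations.

```python
# Minimal transfer-learning sketch with Hugging Face Transformers (illustrative assumptions throughout).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g., a binary sentiment task
)

# Freeze the pretrained encoder so only the new classification head is updated.
for param in model.bert.parameters():
    param.requires_grad = False

texts = ["the movie was great", "the movie was terrible"]  # toy stand-in data
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

model.train()
outputs = model(**batch, labels=labels)  # returns loss and logits
outputs.loss.backward()
optimizer.step()
```

One training step on two toy sentences obviously isn’t a real fine-tuning run, but it shows why transfer learning is so attractive: the expensive pretrained weights are reused as-is and only a small fraction of the parameters needs gradient updates.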


If you’ve read a few blog posts in your efforts to stay up to date with developments in transformer models, you’ve probably noticed something peculiar about a common analogy. Transformers are the most visible and impactful application of attention in machine learning, and while transformers have mostly been used in NLP, the biological inspiration for attention is loosely based on the vision systems of animals.

When humans and other animals look at something, each portion of the scene is experienced at a different level of detail; i.e. it’s easy to notice a difference in detail for objects in your peripheral vision compared to the center of your field of view. A number of mechanisms contribute to this effect, and there is even an anatomical basis for visual attention in the fovea centralis, a pit region in the center of the retina with an increased density of photosensitive cone cells responsible for facilitating detailed visual tasks. The human eye also facilitates visual attention by changing the shape of its lens to bring a particular depth into sharp focus like a photographer manipulating depth of field. At the same time, registering images from both eyes on a specific part of an attended object provides peak stereo vision, aka depth perception. These intuitive examples are often used to describe the concept of attention and form a natural fit for image-processing tasks with direct biological analogues.

So why do we hear so little about transformer models applied to computer vision tasks?


Attention in Computer Vision Networks

The use of attention in computer vision networks has followed a now-familiar pattern. Significant work on visual attention in neural networks was underway decades ago, and yes, Geoffrey Hinton was working on attention more than ten years ago (Larochelle and Hinton 2010). Attention didn’t make a huge impact on image processing until more recently, however, following the success of NLP transformers.

Visual attention didn’t seem to be necessary for the performance leaps in image classification and processing that kicked off the current deep learning renaissance, including AlexNet and the subsequent champions of the ImageNet image recognition competition. Instead, deep learning researchers and engineers working in computer vision scrambled to collect the arguably lower-hanging fruit of increasingly deep convolutional neural networks and other architectural tweaks. Consequently, attention remained a niche area in deep vision models while other innovations like residual connections and dropout took hold.

Concurrent with the high-profile advances of transformer models for NLP, several image transformer research projects have been quietly underway. The most recent high-profile offerings in the vision transformer space are 2020 contributions: the Vision Transformer from Dosovitskiy and colleagues at Google Brain, Image GPT from Chen et al. at OpenAI, and the Visual Transformer from researchers at Facebook. Due in no small part to hard-working PR departments, we’ve come to expect headline advances like these from the large commercial labs attached to the top listings in the S&P 500 (including OpenAI, given its special relationship with Microsoft), but there have been plenty of contributions to attention research for computer vision from smaller labs.

One of the first examples of taking inspiration from the NLP successes that followed “Attention Is All You Need” and applying the lessons learned to images was the aptly named “Image Transformer” paper from Parmar and colleagues in 2018. Before that, in 2015, a paper from Kelvin Xu et al., working under the tutelage of Yoshua Bengio, used deep computer vision models with hard and soft attention in “Show, Attend and Tell” to generate image captions. Like the NLP models of the time, which were also starting to experiment with attention, Xu et al. used recurrent connections (in the form of an LSTM head) in their caption generation model, and their attention mechanism was a little different from the dot-product attention used by Vaswani et al. that has since become prevalent in transformer models.

Shared and distinguishing characteristics of several recent applications of attention to neural computer vision models, culminating in the tokenized transformer models from Dosovitskiy et al. and Wu et al. in 2020.

But attention in computer vision predates the current deep learning era. As mentioned earlier, biological attention mechanisms have long been an inspiration for computer vision algorithms, and an interplay between using algorithms to model biological vision systems and using ideas from biological vision systems to build better computer vision algorithms has driven research for decades. For example, Leavers and Oegmen both published work on the subject in the early 1990s.

A Concise Review of Attention in Computer Vision

There have been many attempts over the years to capture the fuzzy concept of attention for better machine learning and computer vision algorithms. Since the introduction of Vaswani’s Transformer, however, the field seems to have settled on scaled dot-product self-attention as the dominant mechanism, which we’ll review here graphically. For a more detailed mathematical explanation of this and other attention mechanisms, see Lilian Weng’s post on the subject.

Graphical representation of scaled dot-product self-attention, currently the most common attention mechanism in transformer models.

The attention mechanism people most commonly mean when discussing self-attention is the scaled dot-product mechanism. In this scheme, each input vector is projected by learned linear weight matrices (aka dense layers) into key, query, and value vectors. Unlike the generic representations produced at each layer of a typical deep network, each member of the key, query, value trio has a specialized role to play.

Taking the dot product of a query vector with a key vector yields a scalar, which is later used to weight the corresponding value vector. To keep the magnitudes of the resulting weighted value vectors bounded, the query-key dot products are scaled by the square root of the key dimension and then passed through a softmax activation, ensuring that the weights for each query sum to 1.0. The weighted value vectors are finally summed together and passed on to the next layer.
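Written out, with the queries, keys, and values for a whole sequence stacked into matrices Q, K, and V, and with d_k the key dimension, the mechanism described above is the scaled dot-product attention of Vaswani et al.:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```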

There are some variations across transformers in how the input vectors are embedded/tokenized and how attention modules are put together, but the mechanism described above is truly “all you need” as a building block to put together a transformer.
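For readers who prefer code to diagrams, here is a minimal NumPy sketch of single-head scaled dot-product self-attention as described above. The random inputs and dimensions are illustrative assumptions; real transformers add multiple heads, masking, positional information, and learned output projections.

```python
# Single-head scaled dot-product self-attention, sketched in plain NumPy (illustrative).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v       # project inputs to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # scaled query-key dot products
    weights = softmax(scores, axis=-1)        # each row of weights sums to 1.0
    return weights @ v                        # weighted sums of the value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8              # toy sizes chosen arbitrarily
x = rng.normal(size=(seq_len, d_model))       # stand-in for embedded input tokens
w_q, w_k, w_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)        # shape: (seq_len, d_k)
```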

Attention mechanisms used in various computer vision models. The multiple competing mechanisms that predate Vaswani et al. 2017 have largely given way to the now-dominant scaled dot-product mechanism used in the archetypal Transformer.

Previous implementations of attention in computer vision models followed more directly from their biological inspiration. Larochelle and Hinton’s foveal fixations and Mnih et al.’s differentiable spatial fovea mechanism, for example, are both inspired by the fovea centralis found in many vertebrate eyes. With the present dominance of transformer models in NLP and the subsequent application of the sequence-transformer idea to computer vision, image/vision transformers, unlike those earlier mechanisms, are now typically trained on unwrapped 1-dimensional representations of 2-dimensional images. Interestingly, this allows nearly the same models to be used for visual tasks, like image generation and classification, as for sequence-based tasks like natural language processing.
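To make the “unwrapped 1-dimensional representation” concrete, here is a small NumPy sketch of the kind of preprocessing a ViT-style model applies: the image is cut into fixed-size patches and each patch is flattened into a vector, producing the sequence the transformer actually consumes. The image and patch sizes are illustrative assumptions.

```python
# Illustrative sketch: turning a 2-D image into a 1-D sequence of flattened patches.
import numpy as np

def image_to_patch_sequence(image, patch_size=16):
    """image: (H, W, C) array; returns (num_patches, patch_size * patch_size * C)."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "toy sketch assumes divisibility"
    patches = (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)                  # group pixels by patch grid position
             .reshape(-1, patch_size * patch_size * c)  # flatten each patch to one vector
    )
    return patches

image = np.random.rand(224, 224, 3)         # toy RGB image
sequence = image_to_patch_sequence(image)   # shape: (196, 768)
# A learned linear projection of each row plus position embeddings would follow
# before the sequence is fed to otherwise standard transformer layers.
```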

Should Vision Transformers Be Used for Everything?

In a series of experiments beginning in the 1980s (reviewed in the New York Times and by Newton and Sur), Mriganka Sur and colleagues rerouted visual sensory inputs to other parts of the brains of ferrets, most notably the auditory thalamus. As the name suggests, the auditory thalamus and its associated cortex are normally associated with processing sound. A fundamental question in neuroscience is what proportion of brain capability and modular organization is hard-wired by genetic programs, and what proportion is amenable to influences during postnatal development. We’ve seen some of these ideas about “instinctual” modular organization applied to artificial neural networks as well; see “Weight Agnostic Neural Networks” from Gaier and Ha (2019), for example.

With a variety of evidence, including trained behavioral discrimination of visual cues and electrophysiological recordings of neural responses to patterned visual inputs in the auditory cortex, Sur’s group and collaborators showed that visual inputs could be processed in alternative brain regions like the auditory cortex, making a strong case for developmental plasticity and a general-purpose neural learning substrate.

Why bring up an ethically challenging series of neural ablation experiments in newborn mammals? The idea of a universal learning substrate is a very attractive concept in machine learning, amounting to the polar opposite of the expert systems of “Good Old-Fashioned Artificial Intelligence.” If we can find a basic architecture that is capable of learning any task on any type of input data and couple that with a developmental learning algorithm capable of adjusting the model for both efficiency and efficacy, we’ll be left with an artificial general learner if not a full-blown artificial general intelligence to try and understand.

The image transformers described in this article are perhaps one step in that direction. For the most part (and in particular for the work published in 2020 by OpenAI, Google, and Facebook on their respective image transformer projects), researchers tried to deviate as little as possible from the models used for NLP. Most notably, that meant building in little intrinsic awareness of the 2-dimensional nature of the input data. Instead, the various image transformers had to learn how to parse images that had been converted to partially or completely unraveled 1-dimensional sequences.

Shortcomings of Image Transformers

There were shortcomings as well. OpenAI’s Image GPT reportedly has 2 to 3 times more parameters than comparably performing convolutional neural networks. One advantage of the now-standard convolutional architectures for images is that convolution kernels trained on smaller images can easily be applied, at inference time, to images of almost arbitrarily higher resolution (in terms of pixel dimensions, not object scale). The parameters are the same for a 64 by 64 image as for a 1920 by 1080 one, and it doesn’t really matter where an object appears in an image because convolution is translationally equivariant.
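A quick sketch of that point, assuming PyTorch as the framework: a convolutional layer’s parameter count depends only on its kernel size and channel counts, so the very same layer runs unchanged on inputs of different resolutions.

```python
# Sketch: a convolution's parameters are independent of input resolution (PyTorch, illustrative).
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
num_params = sum(p.numel() for p in conv.parameters())
print(num_params)  # 16 * 3 * 3 * 3 weights + 16 biases = 448, whatever the image size

small = torch.randn(1, 3, 64, 64)      # a 64 x 64 image
large = torch.randn(1, 3, 1080, 1920)  # a 1920 x 1080 image
print(conv(small).shape)  # torch.Size([1, 16, 64, 64])
print(conv(large).shape)  # torch.Size([1, 16, 1080, 1920])
```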

Image GPT, on the other hand, is strongly constrained by the memory and compute requirements that come from applying dense transformer layers directly to pixel values. Consequently, Image GPT carries high computational costs compared to convolutional models with similar performance.

The Visual Transformer and the Vision Transformer, from Facebook and Google respectively, seem to avoid many of the challenges Image GPT faces by doing away with raw pixel values in favor of tokenized vector embeddings, often also invoking some sort of chunking or locality to break up the image. The result is that, even though Google’s Vision Transformer requires much larger datasets than similar conv-nets to achieve state-of-the-art results, it’s reportedly 2 to 4 times more efficient at inference time.

You Won’t Be Converting Datasets Just Yet…

There is probably too much biological evidence for the benefits of sensory-computational networks that evolved explicitly to take advantage of 2D data to throw those ideas away, so we don’t recommend converting all of your datasets to 1D sequences any time soon.

The resemblance of learned convolutional kernels to experimentally observed receptive fields in biological vision is too good to ignore. We won’t know for a while whether the generality of transformers constitutes a step on the best path to artificial general intelligence or more of a misleading meander (personally, we still have reservations about the scale of computation, energy, and data required to get these models to perform well), but they will at least remain very relevant commercially and warrant careful consideration with regard to AI safety for the foreseeable future.


License Plate Recognition (All you need to know)

This article provides an overview of license plate recognition (LPR) systems. The description starts with the ‘technical’ view and then proceeds to the ‘market’ view.


Data Engineering, the Cousin of Data Science, Is Troublesome

A Data Scientist must be a jack of many, many trades. Especially when working in broader teams, understanding the roles of others, such as data engineers, can help you validate progress and stay aware of potential pitfalls. So, how can you convince your analysts of the importance of expanding their toolkit? Examples from real life often provide great insight.


Cloud Computing, Data Science and ML Trends in 2020-2022: The battle of giants

Kaggle’s ‘State of Data Science and Machine Learning 2020’ survey covers a wide range of topics. In this post, we look at the popularity of cloud computing platforms and products among the data science and ML professionals who participated in the survey.


How to Use MLOps for an Effective AI Strategy

The need to deal with the challenges and smaller nuances of deploying machine learning models has given rise to the relatively new concept of MLOps: a set of best practices aimed at automating the ML lifecycle, bringing together ML system development and ML system operations.


Top 5 Artificial Intelligence (AI) Trends for 2021

What is Artificial Intelligence (AI)?

There are many sources that give similar answers to the question, “What is AI?” By the 1950s, many scientists, mathematicians and philosophers were looking into the concept of Artificial Intelligence. One such person was Alan Turing, who to this day is considered by many to be the Father of Artificial Intelligence.

He laid out the mathematical and logical reasoning behind the concept of machine intelligence, in which machines and computers would be able to replicate human behavior and intelligence. His paper “Computing Machinery and Intelligence” outlines his reasoning at the dawn of artificial intelligence. Fast forward 70 years and we now live in a world where computers are able to converse with humans, albeit with limitations, as our world progresses toward more sophisticated AI.

Some definitions of AI include:

“The design and development of computer systems that have the knowledge and skills required to perform the tasks which usually require human intelligence to undertake” – AILab


“The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” – Britannica

“In computer science, the term artificial intelligence (AI) refers to any human-like intelligence exhibited by a computer, robot, or other machine. In popular usage, artificial intelligence refers to the ability of a computer or machine to mimic the capabilities of the human mind — learning from examples and experience, recognizing objects, understanding and responding to language, making decisions, solving problems — and combining these and other capabilities to perform functions a human might perform, such as greeting a hotel guest or driving a car.” – IBM

Currently, What is the Most Advanced AI?

In 2020 we saw two big achievements in the race towards true artificial intelligence. OpenAI introduced GPT-3, and Google’s DeepMind released AlphaFold 2. Both of these organizations are racing to research, create and advance the role of artificial intelligence within our society. There are many others making advancements in AI, but GPT-3 and AlphaFold 2 were two of the most notable in 2020.

Human-computer interaction is now further along thanks to GPT-3, which allows us to converse with computers about more specialized topics. As for AlphaFold 2, it arrived in a year when the world was engulfed in the COVID-19 pandemic and pharmaceutical giants were on a mission to find a vaccine. AlphaFold 2 could very well help scientists get there faster, and help them understand diseases more quickly at the molecular level, which will help save lives in the future.


Language Processing with GPT-3

With GPT-3 (Generative Pre-trained Transformer 3), OpenAI trained an AI model to converse with humans and to read and write text. For years, people have been fascinated by the prospect of talking to humanoid robots in their native language, and many believe this to be a critical milestone for AI. GPT-3 can process text in many languages better than its predecessor GPT-2, thanks to its 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2’s now-meager 1.5 billion.

A 50-Year Challenge Broken with AlphaFold 2

Scientists at Google’s DeepMind created AlphaFold 2, hyped as one of the biggest breakthroughs in medical science and biology. The model can predict the 3D structure of proteins from their amino acid sequences, which could increase the rate at which we understand diseases and speed up pharmaceutical development. Rarely in the last century has this been more important for the field of medicine. AI is well suited to assisting the medical industry: modeling proteins at the molecular level; comparing medical images and finding patterns or anomalies faster than a human; and countless other opportunities to advance drug discovery and clinical processes. Scientists can spend days, months and even years trying to understand the DNA of a new disease, but can now save time with an assist from AI. Breakthroughs like AlphaFold 2 need to continue for us to advance our understanding in a world filled with so much we have yet to understand.

What AI Trends Will We See In 2021?

Many of these trends are continuations from previous years and are being tackled from many sides by many people, companies, universities and other research institutions.

The following trends are what we are likely to see in 2021:

Voice and Language Driven AI


In 2020, we saw economies grind to a halt and businesses and schools shut down. Businesses had to adopt a remote working structure in a matter of days or weeks to cope with the rapid spread of the COVID-19 pandemic. This has created a new focus on voice- and language-driven AI to reduce the amount of touch-based technology.

AI and Cloud Adoption


AI and the cloud go together in today’s technological ventures like peas and carrots. Digital assistants like Apple Siri, Google Home and Amazon Alexa have penetrated every aspect of our lives, from industries to communities and even our homes. Tasks such as ordering online, using a household fixture/appliance, making an appointment, listening to music, asking a question, and even communicating with someone over text or calling them directly can now be done using digital assistants that were created using artificial intelligence methods and cloud resources.

For businesses, cloud computing has offered, and continues to offer, the ability to scale operations effectively and efficiently. Computing resources can be replicated with the click of a button to scale up or down as needed. Extra memory and faster processors can be added quickly; large amounts of data (gigabytes, petabytes and beyond) can be served from a single database by simply provisioning more storage; new software can be tested and rolled out across an organization more efficiently; and so much more.

Cloud services such as IBM Cloud, Amazon Web Services, Google Cloud, and Microsoft Azure all provide pre-trained, ready-to-use machine learning, deep learning and other artificial intelligence models, algorithms and services for businesses to use in their data analytics process. This gives even small businesses the ability to access powerful models that have been trained on millions and even billions of rows of data, at a fraction of the cost. You can start off with a cloud-based service until an on-premises AI workstation, server or cluster makes more financial sense and lets you keep your data fully under your control. 2021 should find even more AI startups using cloud-based services to get off the ground quickly so they can focus their financial capital on other essential business factors.

AI & Martech


“Martech” is the combination of marketing and technology to achieve marketing goals and objectives. In the past, marketing could be thought of as the data center of an organization, since it was this department’s job to collect, organize and translate data about customers for internal stakeholders. Naturally, as technology became more advanced and ingrained in society, it was an easy marriage to take marketing to the next level.

Today, recommender systems, digital marketing and conversational AI/chatbots are all prevalent on websites that offer a service for consumption. Wearable devices, IoT, sensor technology, Internet and website tracking cookies, and more help companies collect vast amounts of data from everyday consumers, which can then be used to better understand consumer behavior and to create new products and services. As privacy concerns continue to pick up steam, companies will be looking for new avenues to pursue their marketing goals so they can continue to track consumer behavior.

AI & Healthcare


2020 was the year the world saw its worst pandemic since the Spanish flu over a century ago. The healthcare industry was overwhelmed (and still is), with medical professionals at risk of infection, overworked and fatigued. An overwhelmed healthcare system also means that patients with other illnesses and diseases requiring emergency services cannot receive the treatment they need. Using AI, hospitals and healthcare systems will be looking to automate certain tasks, such as triaging and diagnosing patients, or evaluating patients’ medical records to identify high-risk individuals or issues missed during previous office visits. This can limit exposure to disease, give priority care to those who need it most, and flag anomalies that can lead to better disease prevention, among other things.

As previously mentioned, Google’s DeepMind created AlphaFold 2, which can predict the 3D structure of proteins from their amino acid sequences. This allows scientists to understand diseases more quickly and then rapidly begin the process of making new pharmaceuticals.

Radiologists and other medical professionals have already been using AI to help scan X-rays and MRIs to help find diseases and other problems. 2021 should find them leaning on AI more as accuracy rates continue to rise above what humans can see.

AI & Cybersecurity


Cybersecurity has been in the spotlight for the past few years. There have been many public reports of hackers infiltrating large companies and stealing sensitive customer and insider information. These attacks will only continue to rise in 2021, including ransomware attacks that lock a computer until the victim pays the hacker.

Using artificial intelligence, algorithms can learn their users’ habits in order to establish a baseline of normal behavior. Once suspicious behavior is detected, the system can either alert us or stop the attacker from going further. This can be applied to a company or to an individual user at home. People are now starting to adopt smart homes in which they control daily tasks using a digital assistant. Training AI algorithms to learn their users’ behavior can help prevent hackers from illegally gaining access to a person’s home. Using home devices is convenient, but it can also leave a person vulnerable to cyber-attacks, which is where AI can assist in mitigating such risks. We will definitely need to lean on AI to help keep hackers at bay as they rapidly adapt to new security techniques.
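As a rough sketch of the kind of behavioral anomaly detection described above (an illustration using scikit-learn, not a description of any particular security product), one simple approach is to fit an unsupervised model on features summarizing a user’s normal activity and flag events that deviate from that baseline. The features and numbers below are made-up assumptions.

```python
# Illustrative sketch: flagging unusual user activity with an unsupervised anomaly detector.
# The features and data are hypothetical; real systems use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend history of normal behavior: [login_hour, megabytes_transferred, failed_logins]
normal_activity = np.column_stack([
    rng.normal(9, 1.5, size=500),   # logins cluster around 9 a.m.
    rng.normal(50, 10, size=500),   # roughly 50 MB transferred per session
    rng.poisson(0.2, size=500),     # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

new_events = np.array([
    [9.5, 48, 0],    # looks like an ordinary workday session
    [3.0, 900, 7],   # 3 a.m. login, huge transfer, repeated failed logins
])
print(detector.predict(new_events))  # 1 means "looks normal", -1 means "flag as suspicious"
```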

Why Has Artificial Intelligence (AI) Gained Popularity Recently?

A combination of factors has increased the need for quickly evolving AI. These include our own fast-paced lives, digital assistants such as Google Home and Amazon Alexa, remote working opportunities, a larger focus on long-term health, and the abundance of data and information now available. Our society is quickly learning how to fit AI into every corner of our daily existence.

Many people now have wearable devices, most notably Fitbits and Apple Watches, which individuals use to measure their health status. Examples like this show that people are looking for, or at least willing to adopt, new forms of technology that bring more convenience into their lives and help them better understand themselves and the world around them.

For companies, the creation of powerful new supercomputers and partnerships with research organizations create synergies that can lead to incredible innovations and inventions. Microsoft announced that it will partner with OpenAI on new initiatives, using Microsoft supercomputers and the infrastructure available on its Azure platform to train incredibly large AI models. Collaborations like these only make the path to discovering new forms of artificial intelligence more viable.

For individual consumers, digital assistants such as Siri, Alexa and Google Home make life more convenient. The motivation for technological advancement has always been to make life easier and more convenient. As we rely more on AI devices, we become more dependent on them and more open to new ways of making our lives even easier. Demand is likely to increase as AI adoption grows and companies prove to consumers that their AI technology can enrich their lives (hello, robo-taxis and self-driving cars!).

Artificial Intelligence Research Is Ongoing

Artificial intelligence is a concept that continues to reach more people, and continues to evolve thanks to the army of researchers, scientists, engineers and entrepreneurs devoted to advancing the field and bringing it to the masses.

2021 looks like it will be yet another year of experimentation and potential breakthroughs that will build upon the works of previous years, and new innovations may surprise us as new challenges continue to pop up.

OpenAI gave us the biggest leap in natural language processing in 2020; however, its model required an enormous amount of compute. Microsoft now plans to help OpenAI through a collaboration that uses Microsoft supercomputers to build even more powerful and robust AI models for businesses and consumers. There will most likely also be more emphasis on using AI to optimize and reduce the power consumption of these data-hungry machines.

Google’s DeepMind, AI for Good by Microsoft, Facebook AI, Intel’s University Research & Collaboration Office (URC), NVIDIA AI and OpenAI are just a few of the best known companies and organizations that are driving AI research. They have no qualms about forming collaborations, educating the next generation and finding the best and brightest minds to supplement that work. Partnering with institutes, universities, and companies from across the globe allows AI research to advance rapidly as the best minds help solve problems relating to health, poverty, education, environment, and everything else that touches our lives each day.

2021 will truly be a year to watch closely.


Going Beyond the Repo: GitHub for Career Growth in AI & Machine Learning

Many online tools and platforms exist to help you establish a clear and persuasive online profile for potential employers to review. Have you considered how your go-to online code repository could also help you land your next job?


Travel to faster trusted decisions in the cloud

Join technology experts, partners and analysts in the industry to see what is taking off in AI, cloud computing and putting models into production for better outcomes and trusted results. Register today!


Mastering TensorFlow Variables in 5 Easy Steps

Learn how to use TensorFlow Variables, how they differ from plain Tensor objects, and when to prefer them over Tensors | Deep Learning with TensorFlow 2.x.
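As a quick taste of the distinction that article covers, here is a minimal sketch (assuming TensorFlow 2.x with eager execution) showing that a tf.Variable holds mutable, trackable state that can be updated in place, while a plain Tensor is immutable.

```python
# Minimal sketch of tf.Variable vs. a plain Tensor (TensorFlow 2.x, illustrative).
import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])   # a plain Tensor: immutable
v = tf.Variable([1.0, 2.0, 3.0])   # a Variable: mutable, trackable state

v.assign_add([0.5, 0.5, 0.5])      # update the Variable in place
print(v.numpy())                   # [1.5 2.5 3.5]

# Tensors have no assign methods; "updating" one just produces a new Tensor.
t = t + 0.5
print(t.numpy())                   # [1.5 2.5 3.5]

# Variables are what Keras layers and optimizers track as trainable parameters.
```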

