Part 2: Sudoku and Cell Extraction

Source

Sudoku Solver AI with OpenCV

We will be creating a Sudoku Solver AI using Python and OpenCV to read a Sudoku puzzle from an image and solve it. There are a lot of methods to achieve this goal, so in this series I have compiled the best methods I could find or research, along with some hacks and tricks I learned along the way.

This article is a part of the series Sudoku Solver with OpenCV.
Part 1: Image Processing
Part 2: Sudoku and Cell Extraction
Part 3: Solving the Sudoku

Before and After part 1: left and right respectively

Steps

I am trying to be as detailed as possible in listing the steps along with their descriptions. There are several ways to extract the sudoku and solve it. I will mention alternate ways, but won’t go into detail about them.

  1. Import the image
  2. Pre-processing the Image
    2.1 Gaussian blur: We blur the image to reduce noise for the thresholding algorithm
    2.2 Thresholding: Segmenting the regions of the image
    2.3 Dilating the image: In cases like noise removal, erosion is followed by dilation
  3. Sudoku Extraction
    3.1 Find Contours
    3.2 Find Corners: Using the Ramer-Douglas-Peucker algorithm / approxPolyDP to find corners
    3.3 Crop and Warp Image: We remove all the elements in the image except the sudoku
    3.4 Extract Cells

3. Sudoku Extraction

3.1 Find Contours:

We will find external contours and then sort by area in descending order. Thus, the largest polygon is stored in contours[0].

findContours: finds the boundaries of shapes having the same intensity.
CHAIN_APPROX_SIMPLE: stores only the minimal number of points needed to describe the contour.
RETR_EXTERNAL: retrieves only the “outer” contours.

for c in ext_contours:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        # Here we are looking for the largest 4-sided contour
        return approx

Tip: If you want to see the contours, try cv2.drawContours(...).

3.2 Find Corners:

The corners of the sudoku lie on the largest contour. There are several ways to find them. Notable methods include the Canny edge detector, the Ramer-Douglas-Peucker algorithm, bounding rectangles, and approxPolyDP.

A. approxPolyDP:
We approximate the curve given by the largest contour. The largest 4-sided contour is the sudoku.

Function details:
cv2.approxPolyDP(curve, epsilon, closed[, approxCurve])
curve -> here, the largest contour
epsilon -> parameter specifying the approximation accuracy; this is the maximum distance between the original curve and its approximation
closed -> if true, the approximated curve is closed; otherwise it is not
return type: the same type as the input curve

peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.015 * peri, True)
if len(approx) == 4:
    # Here we are looking for the largest 4-sided contour
    return approx
# approx now holds the four corner points

corners[0], corners[1], … store the points in the format [[x, y]].

# Extracting the points
corners = [(corner[0][0], corner[0][1]) for corner in corners]
top_r, top_l, bottom_l, bottom_r = corners[0], corners[1], corners[2], corners[3]
return top_l, top_r, bottom_r, bottom_l
# Index 0 - top-right
# 1 - top-left
# 2 - bottom-left
# 3 - bottom-right

B. Ramer Doughlas Peucker algorithm:
The bottom-right corner has the largest sum x + y and the top-left the smallest, while the top-right has the largest difference x - y and the bottom-left the smallest. We will use max(list, key) and min(list, key) with enumerate to pick them out.

operator is a built-in module providing a set of convenient operators. In two words operator.itemgetter(n) constructs a callable that assumes an iterable object (e.g. list, tuple, set) as input, and fetches the n-th element out of it.
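
A quick illustration of itemgetter (the values below are my own toy data):

```python
import operator

# itemgetter(2) builds a callable that fetches element 2 of whatever it is given
third = operator.itemgetter(2)
print(third([10, 20, 30, 40]))  # 30

# Combined with enumerate, it lets max()/min() rank (index, value) pairs by the value
idx, val = max(enumerate([3, 9, 4]), key=operator.itemgetter(1))
print(idx, val)  # 1 9
```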

bottom_right, _ = max(enumerate([pt[0][0] + pt[0][1] for pt in ext_contours[0]]), key=operator.itemgetter(1))
top_left, _ = min(enumerate([pt[0][0] + pt[0][1] for pt in ext_contours[0]]), key=operator.itemgetter(1))
bottom_left, _ = min(enumerate([pt[0][0] - pt[0][1] for pt in ext_contours[0]]), key=operator.itemgetter(1))
top_right, _ = max(enumerate([pt[0][0] - pt[0][1] for pt in ext_contours[0]]), key=operator.itemgetter(1))
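
The sum/difference trick can be sanity-checked on a toy contour (the coordinates below are mine, purely illustrative) in OpenCV’s (N, 1, 2) point format:

```python
import operator
import numpy as np

# A toy contour; only the four extreme points matter
pts = np.array([[[12, 10]], [[90, 14]], [[88, 92]], [[8, 86]], [[50, 9]]])

sums = [pt[0][0] + pt[0][1] for pt in pts]    # x + y: smallest at top-left, largest at bottom-right
diffs = [pt[0][0] - pt[0][1] for pt in pts]   # x - y: largest at top-right, smallest at bottom-left

top_left_i, _ = min(enumerate(sums), key=operator.itemgetter(1))
bottom_right_i, _ = max(enumerate(sums), key=operator.itemgetter(1))
top_right_i, _ = max(enumerate(diffs), key=operator.itemgetter(1))
bottom_left_i, _ = min(enumerate(diffs), key=operator.itemgetter(1))

top_left = tuple(int(v) for v in pts[top_left_i][0])          # (12, 10)
bottom_right = tuple(int(v) for v in pts[bottom_right_i][0])  # (88, 92)
```

Note that max/min return the *index* of the winning point, which is why the point itself is looked up afterwards.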

Corners of the sudoku

3.3 Crop and Warp Image

In order to crop the image, we need to know the dimensions of the sudoku. Although a sudoku is a square with equal dimensions, to ensure that we don’t crop any part of the image we will calculate both the height and the width.

width_A = np.sqrt(((bottom_r[0] - bottom_l[0]) ** 2) + ((bottom_r[1] - bottom_l[1]) ** 2))
width_B = np.sqrt(((top_r[0] - top_l[0]) ** 2) + ((top_r[1] - top_l[1]) ** 2))
width = max(int(width_A), int(width_B))

Similarly, we can calculate the height:

height_A = np.sqrt(((top_r[0] - bottom_r[0]) ** 2) + ((top_r[1] - bottom_r[1]) ** 2))
height_B = np.sqrt(((top_l[0] - bottom_l[0]) ** 2) + ((top_l[1] - bottom_l[1]) ** 2))
height = max(int(height_A), int(height_B))

We need to construct the dimensions of the cropped image. Since indices start from 0, the destination points are (0, 0), (width - 1, 0), (width - 1, height - 1), and (0, height - 1). We then compute the perspective transform and warp the image.

dimensions = np.array([[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]], dtype="float32")
# Convert to NumPy format
ordered_corners = np.array(ordered_corners, dtype="float32")
# Calculate the perspective transform matrix and warp the perspective to grab the grid
grid = cv2.getPerspectiveTransform(ordered_corners, dimensions)
return cv2.warpPerspective(image, grid, (width, height))
Cropped Image

3.4 Extract Cells

So we need to process the image again using adaptive thresholding and bitwise inversion.
Note: Don’t forget to convert the image to grayscale before processing. I made that mistake. The code seemed simple and didn’t look like it could have a problem, but it kept throwing an error, and it took me three hours to realize why. During this time I had my software-engineer moment: you get an error, don’t understand why after trying everything, and when you finally see it you feel like breaking your laptop.

# here grid is the cropped image
grid = cv2.cvtColor(grid, cv2.COLOR_BGR2GRAY) # VERY IMPORTANT
# Adaptive thresholding the cropped grid and inverting it
grid = cv2.bitwise_not(cv2.adaptiveThreshold(grid, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 101, 1))
Cropped and Processed image

Now we extract every cell/square. Most sudokus are square, but not all of them. For instance, the sudoku used throughout this series is not a square, so its cells are not square either. We therefore compute celledge_h and celledge_w using np.shape(grid).

edge_h = np.shape(grid)[0]
edge_w = np.shape(grid)[1]
celledge_h = edge_h // 9
celledge_w = edge_w // 9

We will iterate through the length and width of the cropped and processed image (grid), extract the cells and store them in a temporary grid.

tempgrid = []
for i in range(celledge_h, edge_h + 1, celledge_h):
    for j in range(celledge_w, edge_w + 1, celledge_w):
        rows = grid[i - celledge_h:i]
        tempgrid.append([rows[k][j - celledge_w:j] for k in range(len(rows))])
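
The loop above can be sanity-checked on a synthetic array whose sides are exact multiples of nine (the 180×180 size is my own, purely illustrative):

```python
import numpy as np

grid = np.zeros((180, 180), dtype=np.uint8)  # stand-in for the cropped, thresholded puzzle
edge_h, edge_w = grid.shape
celledge_h, celledge_w = edge_h // 9, edge_w // 9

tempgrid = []
for i in range(celledge_h, edge_h + 1, celledge_h):
    for j in range(celledge_w, edge_w + 1, celledge_w):
        rows = grid[i - celledge_h:i]
        tempgrid.append([rows[k][j - celledge_w:j] for k in range(len(rows))])

print(len(tempgrid))                # 81
print(np.array(tempgrid[0]).shape)  # (20, 20)
```

Each pass slices one cell-height band of rows, then one cell-width run of columns out of it, yielding the 81 cells in row-major order.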

Creating a 9×9 array of the cell images and converting each into a NumPy array, so that it is easier to process.

# Creating the 9x9 grid of images
finalgrid = []
for i in range(0, len(tempgrid) - 8, 9):
    finalgrid.append(tempgrid[i:i + 9])
# Converting all the cell images to np.array
for i in range(9):
    for j in range(9):
        finalgrid[i][j] = np.array(finalgrid[i][j])
# Remove any stale cell images from a previous run
try:
    for i in range(9):
        for j in range(9):
            os.remove("BoardCells/cell" + str(i) + str(j) + ".jpg")
except OSError:
    pass
for i in range(9):
    for j in range(9):
        cv2.imwrite("BoardCells/cell" + str(i) + str(j) + ".jpg", finalgrid[i][j])
return finalgrid
Extracted cell stored in finalgrid[2][8]

Next Step

Check out Part 3: Solving the Sudoku to complete your Sudoku Solver AI. Feel free to reach out to me if you have any questions. Check out the code at Sudoku_AI.

Before and End of Part 2: Left and right respectively

Resources:

Contours:
https://www.youtube.com/watch?v=FbR9Xr0TVdY
approxPolyDP:
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Other:
https://hackernoon.com/sudoku-solver-w-golang-opencv-3-2-3972ed3baae2

Part 2: Sudoku and Cell Extraction was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/sudoku-and-cell-extraction-sudokuai-opencv-38b603066066?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/part-2-sudoku-and-cell-extraction

The Bitter Lesson of Machine Learning

Since that renowned conference at Dartmouth College in 1956, AI research has experienced many crests and troughs of progress through the years. From the many lessons learned during this time, some have needed to be re-learned — repeatedly — and the most important of which has also been the most difficult to accept by many researchers.

Originally from KDnuggets https://ift.tt/2ZsR4sV

Building a REST API with Tensorflow Serving (Part 1)

Part one of a tutorial to teach you how to build a REST API around functions or saved models created in Tensorflow. With Tensorflow Serving and Docker, defining endpoint URLs and sending HTTP requests is simple.

Originally from KDnuggets https://ift.tt/3eDfS65

KDnuggets News 20:n27 Jul 15: Great explanation of Calculus the Key to Deep Learning; 8 data-driven reasons to learn Python

We bring you free MIT courses on Calculus, which is the key to understanding Deep Learning – check this amazing explanation of an integral and dx; 8 data-driven reasons to learn Python; How to get and analyze Financial data with Python; Free ebook: The Foundations of Data Science and more.

Originally from KDnuggets https://ift.tt/2CyYTEs

eBook: Data Integration and the R&D Organization

In this ebook, we’re looking at data integration — the process of combining information from different sources — and why it’s a valuable approach across the enterprise.

Originally from KDnuggets https://ift.tt/2DBeTpW

The Future Of Design: ADI Automation Or AI/human Collaboration? [Answered]

Source

The Future of Design: ADI Automation, or AI/Human Collaboration? [Answered]

Complete automation might not be the exact future of design as we all expect it to be. So if not ADI, then what else can change the future of web design?

Source: https://medium.com/@ArnoldoKleider/artificial-intelligence-and-the-future-of-web-design-47000eb7aad4

Till now, you had only two options to get a site designed perfectly.

Option a. Choose and hire a web design agency

Option b. Sign up for one of those “drag-and-drop” web design services.

However, now there is a third option, and that is artificial design intelligence, a.k.a. ADI. This nifty little application of AI in web design has created much hype and is already being dubbed the “future of web design”.

So, is that true? Can automated design really become the future of design? Does ADI really have that kind of power? Let’s find out!

Let’s Try And Understand What ADI Really Is!

Till now, we had AI. In the realm of technology, it was like a unicorn, despite being a pretty well-known concept. The basic function of this tech is to take a large amount of input data and inspect it to recognize familiar recurring patterns, then make decisions based on them. The more data it analyzes, the more patterns it recognizes and the better the decisions it can make.

Source: Photo by Franck V. on Unsplash

ADI emerged as one of the many applications of artificial intelligence in art and design, specific to the web design world. With ADI, the system analyzes the popular design trends and creates a personalized site design based on the requirements stated by the user. The basic idea is to automate the whole design procedure based on the data collected on design laws, trends, and best practices and how they might apply across the different types of websites.

The theory of it all sounds good, but what about the practical application? Well, in the real world, the application of ADI is still in its infancy. There are a few platforms, such as Wix ADI or Bookmark, that promise the combination of artificial intelligence and UX design. But so far the number of these platforms is too small to count as significant growth for this tech. And then there is The Grid, a 2014 AI web design company that flew too close to the sun and dropped out of the sky before even learning to fly properly.

So why exactly is this tech not yet a widespread phenomenon? What’s stopping it?

Design, Automated?

Design is art, and art can not be automated.

It is a popular notion among designers who abhor design automation through AI, and it is understandable. After all, if someone came to me tomorrow and said “your job can be done by an AI and we don’t need you anymore”, I’d hate the AI system too. But the claim that artificial intelligence cannot fathom design patterns on its own is not just about hating the system.

Source: Photo by Rock’n Roll Monkey on Unsplash

Design is an expression of human emotions. And web design is done not only for business success but to connect with the users out there. A lot of market participants, both on the design spectrum, and the client spectrum have their doubts about the artificial design intelligence for various reasons. Let’s look at a few of those reasons before checking out the solution.

Understanding Individual Needs

Sure, an ADI system with its AI design thinking is going to be well able to inspect lots of market data and generate a design for an effective shopping site. But will it be able to analyze the individual needs of Jane from suburban Ottawa who is over sixty and still has problems with navigating eCommerce sites?

Source: Photo by Timon Studler on Unsplash

AI deals with large data sets that contain too many variables, too many data points, and too many possibilities for us to understand. However, when it comes to design, detailed research is necessary. Web design done with the help of expert designers tends to have a more hands-on approach. It’s a process where both the designers and clients get into the muck of data and dig up the necessary details. These details are used to create unique websites that cater to the individual needs of each customer, while also catering to the market as a whole. Maybe that is why artificial design intelligence has yet to become more popular.

Full-Scale Design Process Vs Question-Based Design Generation

Look at the design process of any web design agency based in a major city like New York. What does the design process look like?

It is a heavy operation. It starts with user personas and ends with wireframes and clickable mock-ups. And at every stage, the agency will include the client and get approval before moving on to the next.

Source: Photo by Phillip Larking on Unsplash

Compared to that, site design with ADI platforms is a fairly easy process. Just answer some questions, and voilà, a site with matching AI graphic design is ready in minutes. But does the simplicity of it really measure up to the depth of the previous process?

It actually doesn’t. While the simple method of ADI platforms looks better, it is nothing close to the in-depth design process conducted by a design company. For companies that are looking for real success with the sites, it is going to be infinitely better to spring for an in-depth process rather than the ease and simplicity of artificial design intelligence platforms.

The Purity Of Data And Its Impact

There have been numerous examples of how data can affect the AI system adversely. Bad data makes for bad AI, and this is one of the risks users of ADI have to contend with.

When it comes to data for AI-based design, a lot of things can cause corruption to it. Human bias, readymade solutions that do not work correctly, missing and assuming the wrong patterns, and many more contribute towards the corruption of data, and that leads to bad AI.

While using AI in product design, no business can afford to utilize bad data. The costs will be too great to mitigate. The purity of data creates a huge question mark in the fearless application of ADI and until there is a way to purify the data, this question mark is not going to go away.

So, Is There A Middle Way?

So is there any solution, where we can keep using ADI, but at the same time do not lose the good things about the human design process?

Well, there is. The problem with ADI or its concept is the fact that it is too overreaching. The exponential popularity and possibility have created an impression that it can accomplish anything and everything. And to find the solutions to that as well as find the middle way, we need to let go of that idea and focus on inventing AI tools for designers.

Source: Photo by Brian Wangenheim on Unsplash

AI is not going to replace experts anytime soon. And just like that, artificial design intelligence is not going to dethrone the web designers of the world. So rather than aiming for a complete dismissal of real talents in the field of design, how about designing systems that only help the designers in creating an enhanced design? A system that can enhance the creativity, effectiveness as well as the market understanding of the designer is going to do more for the design future than a completely automated ADI.

Wrapping Up: The Future Of Web Design Is Not Automation, But Cooperation

So here we are after discussing ADI, what it is, how it works, and why it doesn’t work. And after talking about all of this, I can confidently say that ADI can only become the future of design if it focuses on cooperation with the designers rather than complete automation. There are still lots of finer points of AI-based UX design that the system still has to master. So until then, let’s focus on creating a design future where designers and ADIs coexist.

The Future Of Design: ADI Automation, Or AI/human Collaboration? [Answered] was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

My Week in AI: Part 4

Photo by Chris Liverani on Unsplash

Welcome to My Week in AI! Each week this blog will have the following parts:

  • What I have done this week in AI
  • An overview of an exciting and emerging piece of AI research

Progress Update

Discovering Metric Learning

I have spent a lot of time this week reading the latest literature on time series forecasting and visual search, as I am working on projects in these two areas.

During my research into visual search, I came across the PyTorch Metric Learning library. Metric learning is the automatic construction of task-specific distance metrics from supervised data, instead of using a standard distance metric such as Euclidean distance. It is especially important for developing a metric to determine image similarity, which is the primary application I was considering. The library allows easy implementation of metric-learning loss functions and miners, such as triplet loss, angular loss, tuple miners and subset batch miners. I believe this is a very convenient library for anyone implementing visual search algorithms.

Emerging Research

Multivariate Time Series Forecasting

The research I will be featuring this week is on time series forecasting. I have been working on time series forecasting for a year now through my work at Blueprint Power, so I try to keep abreast of the latest advancements in this field.

The forecasting of multivariate time series is challenging as it is high dimensional, has spatial-temporal dependency characteristics and each variable depends not only on its own past values but on the values of other variables, too. Du et al. proposed a novel method of forecasting such time series in their paper, ‘Multivariate time series forecasting via attention-based encoder-decoder framework.’¹

Graphical representation of framework architecture¹

The researchers’ proposed framework was made up of a Bi-LSTM encoder, a temporal attention context layer and an LSTM decoder. The attention layer is important because in a typical encoder-decoder structure, the encoder compresses the hidden representation of the time series into a fixed length vector, which means information can be lost. The temporal attention context vectors are created based on a weighted sum of the hidden states of the encoder, and give context on which parts of these hidden states are most useful to the decoder. This allows the decoder to extract the most useful information from the outputs of the encoder.

In experiments on commonly used time series datasets, this proposed framework performed better than vanilla deep learning models such as LSTM and GRU, and also better than other encoder-decoder architectures. For me, the key takeaways from this research are the use of the Bi-LSTM encoder, which the researchers demonstrated had improved performance over an LSTM encoder, and also that the addition of the attention layer improved performance. These are two methods that I will be looking to integrate into my time series forecasting work in the future.

Join me next week for an update on my week’s work and an overview of a piece of exciting and emerging research. Thanks for reading and I appreciate any comments/feedback/questions.

References

[1] Du, Shengdong, et al. “Multivariate Time Series Forecasting via Attention-Based Encoder–Decoder Framework.” Neurocomputing, vol. 388, 2020, pp. 269–279., doi:10.1016/j.neucom.2019.12.118.

My Week in AI: Part 4 was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Artificial Intelligence Can Pose A Danger To Humanity

Source

Today, our technological progress has come a long way; we have been inventing machines that seem to act like humans. Yes, that’s correct: Artificial Intelligence. Before moving too far ahead, let’s get some clarity on what Artificial Intelligence is. In simple terms, Artificial Intelligence has the power to replace human intelligence. These are nothing but machines, powered by intense learning and by following protocols.

This article will throw some light on the benefits of AI and the risks it engenders. It will mainly focus on security hacks, which could be detrimental to humans.

Benefits:

Our great scientists have progressed rapidly, from inventing Siri to self-driving cars. Today’s Artificial Intelligence is termed Narrow AI because of its limited abilities. However, our scientists have a great vision: they are yet to come up with General AI. Narrow AI, even with its limitations, can still outperform humans at specific tasks; General AI, though, is expected to beat humans at any cognitive task.

Source

Risks:

As is often said, everything comes with a price. With AI progressing in myriad fields these days, there is a risk of Artificial Intelligence getting hacked. Hackers have been at this ever since the invention of the technology; it is only a matter of time before they cut through these intelligence systems.

It is quite challenging to get hold of these hackers, because no matter how efficient we are, they seem to outgrow our smartness. Cybersecurity has some massive problems to deal with when it comes to resolving hacks and ensuring security.

To ensure safety, Artificial Intelligence development companies need to roll up their sleeves and become more proactive. They have advanced security in place to deal with Advanced Persistent Threats (APTs) and related threats; thus, they become more confident than ever and relax, thinking everything is safe and sound, only to discover it is not. Such an attitude all but guarantees they will get hacked.

Smugness can be detrimental, and that is the case in this scenario. A few questions have been raised: How will these AI development companies put proper security in place while growing their businesses? Should they have audit facilities to answer regulatory questions?

Can data scientists guarantee the reliability of AI models? How do developers deliver high-quality software for AI development? Asking the right set of questions can help us understand the problems more clearly.

To gain more ground, AI development companies are looking for investment funds to manage and supervise AI, and to grow into the MLOps and ModelOps tooling that will fit into their existing systems. However, the problem is that these companies are not showing the same sincerity about integrating a machine learning model into the current production environment, which can help make better business decisions. The same goes for AI security. They are still stuck in complexities when directing their AI management teams, pushing the problems down the road and leading to more trouble in the future.

Deeper Problems to Look at:

The AI system is vulnerable through its attack surface, from which data can easily be extracted. Here, MLOps comes into the picture. These tools can help AI development companies lock down access to the AI used by the data science team; not just that, they can also help with API threats. At the same time, there are other neglected threats that pose a risk to security.

There is a technique called Adversarial AI, which deceives an AI model by injecting wrong data, disturbing the patterns it has learned and thus damaging the model. Such breaches are caused by cybercriminals. Once a model is breached, they can reverse its functioning and poison its data, and what could happen next is beyond our imagination.

Imagine you are riding in a self-driving car that gets tricked into reading a stop sign as a 60 mph sign. Now think of what can happen; this is a sheer example of data poisoning.

These cybercriminals also use another technique, injecting signals and processes that display no visible effect on the system and instead training the models to treat them as healthy. Once the models are trained to believe this behavior is normal, the hackers use it to carry out further attacks.

However, don’t worry: there are systems in place to manage such problems. Still, only a few app developers are devoting their time and funds to the security aspects.

Adversarial defence needs to be a top priority. Otherwise, it is like leaving your cars unlocked and inviting theft.

Light on Shadow AI:

Okay, so now we know about the threats that come from outsiders, but what about insider threats?

Many teams are working on AI within the organization and are in a constant race to bring in more innovation. If they can’t find what they are looking for, they produce it or procure it. However, you can’t manage or secure what you are unaware of.

The use of AI-related tools and services by individuals who lack the technical know-how to develop their own AI-powered solutions is referred to as Shadow AI. Studies suggest that a large share of IT spending, around 40%, takes place outside the IT department. When this happens, serious security gaps can open up.

So, How do you address these problems?

These risks can be minimized by building collective awareness of security across the organization and working wholly as a team. As mentioned earlier, MLOps and ModelOps can help steer AI operations, making them smoother to manage and supervise in the long term. Lastly, keep constant sight of who is using the AI, and keep things under control.

Hence, before the situation goes haywire, make sure the proper tools are in place. Artificial Intelligence can do wonders and make human life smoother, provided crucial emphasis is placed on the areas where security is concerned.

Artificial Intelligence Can Pose A Danger To Humanity was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
