Does AI Raise Security and Ethics Concerns amid the Pandemic?

Artificial intelligence and big data are now part of everyday working life across many fields in major enterprises. AI is a groundbreaking technology that has transformed how people use computers and run businesses, and its successes keep accumulating at an accelerating pace, from unmanned aerial vehicles and chess and poker programs to digital customer care and analytics systems.

The appearance of smart computers means machines are now taking on complex problems in ways that human beings simply could not before. AI systems have developed and strengthened since the field's earliest days, and AI is now commonly employed in many spheres thanks to its capacity to learn. Aircraft networks, voice synthesis, machine learning, and machine vision are basic implementations of AI.

The coronavirus outbreak pressed the fast-forward button on medical artificial intelligence (AI) software, and the struggle against the virus has pushed the technology further into practice. AI applied to medical imagery now plays an important role in supporting COVID-19 scanning, while big data helps trace the origins of infectious diseases and boosts the efficiency of image readers through image processing and related technologies. As the pandemic has circled the globe, creative AI applications have sprung up in many places.

Medical AI decision-making will also help patients receive better diagnoses and treatment

The use of AI technologies in different medical areas is usually referred to as medical AI. Studies describe healthcare as one of the first and most significant fields in which large, complex, and specialized heterogeneous data are integrated, and AI can readily use that knowledge to speed up the handling of medical issues.

China’s technology firms are also casting a big data shadow over the safety of imported cold-chain food.

They partner with local councils to monitor and safeguard imported cold-chain produce. The approach involves recording cold-food stock, the key links of the cold chain, the inflow and outflow of cold-chain food, and the manufacture and sale of processed cold-chain foods.

Themes such as the growth and development of technology after the pandemic, the interaction between AI and ethics, and whether algorithms are going to ‘govern’ people’s lives have been heatedly debated in the scientific community.

Looking ahead, this momentum rests on two factors.

Firstly, data, still the lifeblood of AI, now streams in. Kaggle, a machine learning and data science website, hosts a free COVID-19 research dataset that compiles related data on one central platform and adds new analyses. The latest collection is machine-readable, which makes it straightforward to use for AI training; it covers more than 128,000 research papers on COVID-19, SARS, and other coronaviruses.

Secondly, medical and computational scientists worldwide are working on these issues with laser focus. The fight against COVID-19 is expected to involve up to 200 million medical professionals, researchers, nurses, technologists, and engineers. They conduct tens of thousands of tests and exchange knowledge with a transparency and speed that we have never seen before. The research challenge hosted on Kaggle around COVID-19 seeks to gather a wide variety of insights into the pandemic, such as its natural history, transmission evidence, diagnostic requirements for the virus, and lessons learned from earlier observational studies, so that health organizations around the world can be better informed and make decisions based on data. The challenge launched on 16 March; within five days it had received over 500,000 views and generated over 18,000 downloads. The hope is that a new coronavirus can be monitored, mapped, detected, and cut off before it is unleashed on the world.

This is not as far off as it might sound. As medical technology and computer science merge ever more closely, we will soon step into an age of fully autonomous AI, when citizens can be expected to adopt wearables, biosensors, and smart home detectors that keep them protected and informed. And as wearable devices and other Internet-of-Things gadgets raise data quality and diversity, a virtuous loop of improvement can continue.

Does AI Raises Security and Ethics Concerns amid Pandemic was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/does-ai-raises-security-and-ethics-concerns-amid-pandemic-d25ae445596f?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/does-ai-raises-security-and-ethics-concerns-amid-pandemic

Implementation of Pandas and TensorFlow: Classification of IBM Employee Attrition

Classifying employees as likely to quit using TensorFlow, Pandas & the IBM attrition dataset

Classification is one of the major topics in machine learning, and some classification problems do not even come with numeric features to analyze.

In this article, I will classify IBM employee attrition using a neural network built with TensorFlow. The model is trained on 80% of the employees; the remaining 20% are then used as a test set, and the trained model predicts the probability of attrition for each of them based on their attributes.

About Datasets:

IBM HR Analytics Employee Attrition & Performance

The dataset has 35 attributes and 1,470 rows, including eight non-numeric (categorical) variables such as marital status, job role, and education field.

Some common problems in classification are:

  • Email Spam
  • Speech Recognition
  • Gesture Recognition
  • Digit Recognition
  • And the list goes on.

Classification problems require labeled datasets, so solving one can involve collecting a large amount of data and labeling it.

For this problem, I imported the following libraries in Python.
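
(The original import block did not survive this repost, so the following is a minimal sketch of the likely setup; the aliases and the Kaggle CSV file name are assumptions rather than the author's exact code.)

```python
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt

# Load the IBM HR Analytics Employee Attrition & Performance dataset
# (file name as distributed on Kaggle; adjust the path if yours differs).
data = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
print(data.shape)  # expected: (1470, 35)
```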

For the employee attrition problem, I did not jump straight into the neural network. First, I binned the employees’ monthly income and computed the share of employees who left versus those who stayed in each income band.
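
The binning code was also stripped from the post; below is a sketch of how that step might look with pandas, continuing from the data frame loaded above (the number of bins is an arbitrary illustrative choice):

```python
# Bin monthly income and look at the share of leavers in each income band.
income_band = pd.cut(data["MonthlyIncome"], bins=10)
attrition_ratio = (data["Attrition"] == "Yes").groupby(income_band).mean()

attrition_ratio.plot(kind="bar")
plt.ylabel("Share of employees who left")
plt.show()
```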

The plot shows that attrition is much higher among low-income employees than among high-income ones.

Now, let’s jump into the TensorFlow implementation. First, I map the attrition labels Yes and No to 0 and 1 respectively and split the dataset into two sets: a training set with 80% of the data and a test set with 20%.
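
A sketch of that step, continuing from the data frame above, might look as follows (the random seed is an arbitrary choice; the Yes/No mapping follows the wording of the article):

```python
# Encode the label exactly as described in the text (Yes -> 0, No -> 1).
data["Attrition"] = data["Attrition"].map({"Yes": 0, "No": 1})

# 80/20 split into training and test data frames.
train_df = data.sample(frac=0.8, random_state=42)
test_df = data.drop(train_df.index)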

Then I remove the attrition column from the training data frame and convert the remaining columns into a Python dictionary whose keys are column names and whose values are keras.Input objects.

Now, we concatenate the numeric inputs together and run them through a normalization layer.

Then we keep all_numeric_inputs in a list to concatenate later. The categorical string values are also mapped to integers to be used as indices. If anything is unclear, each of these functions can be looked up online for an in-depth explanation; the main goal here is to convert the strings into floating-point numbers so the analysis can be performed.
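
The preprocessing code did not survive the repost either. The sketch below reconstructs the steps described above in the spirit of the structured-data preprocessing pattern from the TensorFlow documentation, assuming a recent TensorFlow 2.x and continuing from the split above. Apart from all_numeric_inputs, train_preprocessing, and train_features_dict, which appear in the text, the variable names are mine, and the exact layers the author used may differ:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Separate the label from the training features.
train_features = train_df.copy()
train_labels = train_features.pop("Attrition")

# Drop identifier and constant columns that carry no signal in this dataset
# (a common cleanup step for the IBM attrition data, not mentioned in the article).
train_features = train_features.drop(
    columns=["EmployeeNumber", "EmployeeCount", "StandardHours", "Over18"])

# One keras.Input per column, keyed by column name.
inputs = {}
for name, column in train_features.items():
    dtype = tf.float32 if pd.api.types.is_numeric_dtype(column) else tf.string
    inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)

# Concatenate the numeric inputs and run them through a Normalization layer.
numeric_inputs = {name: inp for name, inp in inputs.items()
                  if inp.dtype == tf.float32}
x = tf.keras.layers.Concatenate()(list(numeric_inputs.values()))
norm = tf.keras.layers.Normalization()
norm.adapt(np.array(train_features[list(numeric_inputs)], dtype="float32"))
all_numeric_inputs = norm(x)

preprocessed = [all_numeric_inputs]

# Map string categories to integer indices, then encode them as vectors.
for name, inp in inputs.items():
    if inp.dtype == tf.float32:
        continue
    lookup = tf.keras.layers.StringLookup(vocabulary=np.unique(train_features[name]))
    encode = tf.keras.layers.CategoryEncoding(num_tokens=lookup.vocabulary_size())
    preprocessed.append(encode(lookup(inp)))

# Everything is a float tensor now; concatenate into one wide feature vector.
preprocessed_result = tf.keras.layers.Concatenate()(preprocessed)
train_preprocessing = tf.keras.Model(inputs, preprocessed_result)

# Features are fed in as a dict of column name -> array of values.
train_features_dict = {name: np.array(value).reshape(-1, 1)
                       for name, value in train_features.items()}
train_preprocessing(train_features_dict)
```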

Running train_preprocessing (the last line of the code above) produces the preprocessed features as a single floating-point array.

Then, we can build a model on top of this preprocessing and fit it with the train_features_dict dictionary as x and train_labels as y.
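
A sketch of such a model is below, continuing from the preprocessing sketch above. The hidden layer sizes, optimizer, and epoch count are illustrative assumptions, while data_model, test_features_dict, and test_labels match the names used in the text:

```python
# A small dense classifier stacked on top of the preprocessing model.
x = train_preprocessing(inputs)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(32, activation="relu")(x)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)

data_model = tf.keras.Model(inputs, output)
data_model.compile(optimizer="adam",
                   loss="binary_crossentropy",
                   metrics=["accuracy"])

data_model.fit(x=train_features_dict, y=train_labels, epochs=20, batch_size=32)

# Evaluate on the held-out 20% of employees.
test_features = test_df.copy()
test_labels = test_features.pop("Attrition")
test_features_dict = {name: np.array(test_features[name]).reshape(-1, 1)
                      for name in inputs}
data_model.evaluate(test_features_dict, test_labels)
```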

This gave me about 88 percent accuracy. I then compared each value of x = data_model.predict(test_features_dict) with test_labels and found that most of the inaccurate predictions were close to the correct class, so there is room to improve this model for better predictions.

Implementation of Pandas and Tensorflow: Classification of IBM employee attrition was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/implementation-of-pandas-and-tensorflow-classification-of-ibm-employee-attrition-80764647bccc?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/implementation-of-pandas-and-tensorflow-classification-of-ibm-employee-attrition

6 Major AI Use Cases In IT Operations

“Data volumes generated by IT infrastructure are increasing two- to three-fold every year.” (Gartner)

When Woolworths, Australia’s biggest supermarket chain, suffered a nationwide IT outage, the company was forced to shut checkouts, have shoppers leave their groceries and close stores. The 30-minute outage resulted in millions of dollars of lost revenue for the company.

Similarly, when the German unit of the British telecommunications company Vodafone suffered a three-hour outage due to a failure of its control systems, more than 100,000 users were cut off.

The fact of the matter is that IT teams today must constantly analyze an unprecedented amount of data across multiple monitoring tools, which leads to long delays in identifying and resolving issues.

Moreover, a single outage can trigger thousands of alerts, logs, and events. In a complex IT infrastructure consisting of several siloed apps and databases, and characterized by an ever-increasing number of IT services and servers, heavy reliance on manual processes to identify the root cause of the problem can severely hamper the functioning of business operations.

In addition, ITOps teams usually work in disconnected silos, making it even more difficult to ensure the most urgent incident at any particular time is prioritized and addressed.

That’s where businesses are turning to AIOps to resolve high impact IT operations problems. Gartner coined the term AIOps in 2016, defining it as software systems that combine big data and artificial intelligence (AI) or machine learning functionality to enhance and partially replace a broad range of IT operations processes and tasks, including availability and performance monitoring, event correlation and analysis, IT service management and automation.

AIOps leverages machine learning, big data, and analytics to accomplish the following:

  • Bringing together the heaps of IT data generated by thousands of siloed apps, systems, and performance-monitoring tools
  • Grouping correlated events and sifting out the significant alerts
  • Diagnosing problems in real-time, escalating to IT for remediation, or automatically resolving them without human intervention
  • Predicting issues before they affect business operations

AI Use Cases In IT Operations

Use Case #1: Predictive IT Maintenance

As important as it is to diagnose IT issues once they are detected, it is equally essential to proactively predict future incidents and automate fixes before they impact business operations.

Owing to the complex, dynamic nature of today’s IT environments, legacy performance-monitoring systems no longer suffice in spotting anomalies and predicting future IT outages.

However, the infusion of artificial intelligence in IT operations improves infrastructure and application performance, reliability, and uptime by predicting and preventing business-critical outages, while also reducing operations and maintenance expenditures.

Machine learning algorithms can analyze past incident data to predict and resolve potential future incidents. This significantly improves key metrics such as MTTR (mean time to repair), MTBF (mean time between failures), MTTF (mean time to failure), and MTTA (mean time to acknowledge).

In addition, historical utilization trends of critical infrastructure resources are studied to predict when an infrastructure device will reach full capacity, ensuring more capacity can be added automatically or through manual intervention to avoid business outages due to capacity constraints.

Use Case #2: Anomaly/Threat Detection

One of the major functions of AIOps is to ensure the security of the IT infrastructure. As most organizations operate in a hybrid setup, with hundreds of applications running in the cloud and in on-premise data centers, it becomes increasingly difficult to monitor such a vast environment.

AIOps leverages complex algorithms to detect botnets, scripts, and other threats in real-time, even those which are multi-vectored and layered, ensuring reduced network downtime and continuity in business service.
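
The article does not name a specific detection algorithm, but one common unsupervised approach is an isolation forest over streaming telemetry features. The sketch below is purely illustrative: the column names and synthetic data are assumptions, not drawn from any particular AIOps product.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-minute network telemetry for one service.
telemetry = pd.DataFrame({
    "requests_per_min": rng.poisson(120, 1000),
    "bytes_out_mb": rng.gamma(2.0, 1.5, 1000),
    "failed_logins": rng.poisson(0.2, 1000),
})

# Flag the most unusual 1% of observations for review.
detector = IsolationForest(contamination=0.01, random_state=0)
telemetry["anomaly"] = detector.fit_predict(telemetry)  # -1 marks anomalies
print(telemetry[telemetry["anomaly"] == -1].head())
```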

Use Case #3: Root Cause Analysis

AIOps tools can not only detect anomalies but also investigate the root cause of issues and develop relationships among abnormal incidents. This enables early detection and diagnosis of IT issues. IT teams will have improved visibility into correlation between incidents and better information about the primary cause. This in turn, helps reduce MTTA/R significantly.

Use Case #4: Event Correlation And Noise Reduction

As mentioned above, the smallest of IT incidents can trigger thousands of alerts, tickets, and events. According to a report by AIOps Exchange, 40% of organizations are flooded with more than 1 million alerts per day. AI facilitates temporal association detection, discovering correlated logs, and combining events into a small number of logical groups.

Such noise reduction takes the burden off IT staff and enhances productivity by allowing them to look at a few critical incidents instead of a large stream of insignificant events.
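
As an illustration of the grouping idea described above (not of any specific vendor's implementation), even a simple time-window heuristic can collapse a burst of related alerts into a single incident group; the alert data and the five-minute window below are assumptions:

```python
from datetime import datetime, timedelta

# Toy alert stream; in practice these would come from monitoring tools.
alerts = [
    {"ts": datetime(2021, 1, 20, 9, 0, 5),   "source": "db-01",  "msg": "high latency"},
    {"ts": datetime(2021, 1, 20, 9, 0, 40),  "source": "app-07", "msg": "timeouts"},
    {"ts": datetime(2021, 1, 20, 9, 1, 10),  "source": "lb-02",  "msg": "5xx spike"},
    {"ts": datetime(2021, 1, 20, 11, 30, 0), "source": "db-01",  "msg": "disk usage"},
]

def group_by_time(alerts, window=timedelta(minutes=5)):
    """Group alerts whose timestamps fall within `window` of the previous alert."""
    groups, current = [], []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        if current and alert["ts"] - current[-1]["ts"] > window:
            groups.append(current)
            current = []
        current.append(alert)
    if current:
        groups.append(current)
    return groups

for i, group in enumerate(group_by_time(alerts), 1):
    print(f"Incident group {i}: {[a['msg'] for a in group]}")
```

Production systems typically combine temporal proximity with topology and text similarity, but the principle is the same.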

Use Case #5: Intelligent Escalation

After root-cause analysis is complete and issues are captured, AIOps tools route incidents to the relevant subject matter experts or teams for swift remediation. Artificial intelligence acts like a routing system, automatically setting the remediation workflow in motion before a human being ever gets involved.

Use Case #6: Capacity Planning

As seen above, AIOps leverages advanced forecasting techniques, such as time-series forecasting, to analyze historical usage and bandwidth and predict future values such as network throughput, server size, and memory. By predicting usage in advance, AIOps enables organizations to purchase additional capacity and reserve instances ahead of demand, leading to large cost savings.
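
As a toy illustration of that kind of forecast (the figures and the simple linear trend below are assumptions; real deployments would typically use seasonal time-series models), capacity exhaustion can be projected from historical utilisation:

```python
import numpy as np

# Twelve months of hypothetical storage utilisation (% of capacity).
usage = np.array([52, 54, 55, 58, 60, 63, 64, 67, 70, 72, 75, 78])
months = np.arange(len(usage))

# Fit a linear trend and project forward to estimate when capacity runs out.
slope, intercept = np.polyfit(months, usage, 1)
months_to_full = (100 - intercept) / slope  # months since the start of the series
print(f"Projected to reach 100% capacity around month {months_to_full:.1f}")
```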

Moreover, an estimate of the number of service tickets expected in the future facilitates capacity building and resource allocation, allowing organizations to employ the requisite number of service desk personnel within stipulated budgets.

Get Started

Functioning as the backbone of modern digital transformations, AI lets organizations survive and thrive in today’s data-heavy and highly componentized IT landscape.

AIOps is an emerging solution that accurately predicts issues before they happen, locates anomalies in real-time, and reduces the mean-time-to-respond (MTTR) for incidents.

It saves time, money, and resources by significantly accelerating root-cause analysis and remediation and improves customer confidence and employee morale by avoiding downtime and maintaining operational continuity. Most importantly, it reinforces the role of IT as a strategic enabler of business growth.

For further insights, you might be interested in Acuvate’s AI-driven managed services (AiDMS), which help CIOs reduce their organizations’ service desk costs, optimize cloud spending, and automate and enhance IT operations through analytics and machine learning (ML).

If you’d like to learn more about this topic or are planning to implement data analytics in your organization, please feel free to get in touch with our data analytics and oil and gas experts for a personalized consultation.

6 Major AI Use Cases In IT Operations was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/6-major-ai-use-cases-in-it-operations-c9b8db04e441?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/6-major-ai-use-cases-in-it-operations

Loglet Analysis: Revisiting COVID-19 Projections

We will show that the decomposition of growth into S-shaped logistic components, also known as Loglet analysis, is more accurate because it takes into account the evolution of multiple COVID waves.
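
As a rough sketch of the idea (not the original KDnuggets code), a multi-wave epidemic curve can be modelled as a sum of logistic components and fitted with non-linear least squares; the synthetic data and starting guesses below are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, r, t0):
    """Single S-shaped logistic component with saturation k, rate r, midpoint t0."""
    return k / (1.0 + np.exp(-r * (t - t0)))

def two_waves(t, k1, r1, t01, k2, r2, t02):
    """Cumulative cases modelled as the sum of two logistic 'loglets'."""
    return logistic(t, k1, r1, t01) + logistic(t, k2, r2, t02)

# t: days since the first case; cases: cumulative counts (synthetic here).
rng = np.random.default_rng(1)
t = np.arange(0, 200)
cases = two_waves(t, 50_000, 0.08, 60, 120_000, 0.06, 150) + rng.normal(0, 500, t.size)

p0 = [40_000, 0.1, 50, 100_000, 0.05, 140]  # rough initial guesses
params, _ = curve_fit(two_waves, t, cases, p0=p0, maxfev=20_000)
print(params)
```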

Originally from KDnuggets https://ift.tt/3oXn5DM

source https://365datascience.weebly.com/the-best-data-science-blog-2020/loglet-analysis-revisiting-covid-19-projections

KDnuggets News 21:n03 Jan 20: K-Means 8x faster 27x lower error than Scikit-learn in 25 lines; Essential Math for Data Science: Information Theory

Here is a clever method for making K-Means 8x faster with 27x lower error than Scikit-learn; understand the information theory you need for data science; learn how to do cleaner data analysis with pandas using pipes; what are the four jobs of the data scientist? And more.

Originally from KDnuggets https://ift.tt/3p5zu8M

source https://365datascience.weebly.com/the-best-data-science-blog-2020/kdnuggets-news-21n03-jan-20-k-means-8x-faster-27x-lower-error-than-scikit-learn-in-25-lines-essential-math-for-data-science-information-theory

Why Do AI Projects Fail Too Often?

Photo by Lukas from Pexels

Potentially, the absolute majority of businesses can integrate AI into their operations. Despite the enthusiasm, though, many still fail to receive sufficient ROI from their AI implementations.

Most commonly, the problem isn’t in AI itself but the strategy behind its integration. For example, even the most outstanding AI model won’t ever realize its full potential if it can’t properly communicate with existing systems. Now, let’s define the most common reasons that make AI projects fail.

Team Composition

Some companies struggle to fathom the idea of AI implementation being a never-ending project that often requires all parts of their organizations to change and adapt. This includes rethinking the team composition and making sure all tracks of AI development are interconnected and in sync.

This calls for thoroughly modified methods for building reliable AI. The two most important questions that need to be answered at this stage are: ‘What type of talent do we need?’ and ‘How will they communicate with each other?’

You’ll need to cover all areas of the AI lifecycle, from design to deployment to monitoring, with the right talent. The roles range from narrowly specialized data scientists to AI ethicists, all of whom need to work closely with each other in order to achieve success.

Data Strategy

You’ve probably heard it: data is one of the most important factors in AI success. Feed AI poor-quality data and it will come up with inaccurate decisions. Here at Iflexion, we consider data cleaning a top priority in any AI development project.

Data Governance

Companies often assume that data cleaning means their historical data needs to be thoroughly checked and reorganized only once. Given that in most cases AI needs both historical and real-time data to make accurate decisions, companies need to completely reimagine their data governance to make data preprocessing continuous.

As organizations have a multitude of data sources, creating a framework that could properly clean, sort, and ingest every necessary bit of information is overwhelming. However, in most cases, not all incoming data is useful. This is why it’s critical to find out exactly what type of data you need in order to achieve your specific AI goal, and then collect, clean, and process only relevant datasets.

Executives often mention the unexpected need for additional investment in the midst of AI development as a significant roadblock. Conveniently, establishing smart data governance frameworks ensures that no resources will be wasted on cleaning datasets that will never be used.

Data Bias

While today we have an avalanche of specialized tools and well-defined approaches to data governance as a whole, companies still struggle to overcome data bias. The consequences of biased AI algorithms include significant economic losses and damaged brand reputation.

Even with perfectly balanced datasets that account for all relevant parameters, AI algorithms can still find unwanted links between attributes and make biased decisions. Currently, there is no way of dealing with this problem other than continuously reassessing training data, together with regular human evaluation of the decisions AI makes.

In this sense, treat this technology like a child: it grows, learns, and makes decisions on its own, but if you stop paying attention to its development, it easily goes astray.

Flexibility and Scalability

At this point, it should be clear that AI is an all-permeating system that requires continuous improvement and monitoring. In a nutshell, any new variable in a company’s business environment can significantly lower the reliability of AI decision-making, while the company’s production environment might simply not be agile enough to reconfigure the AI system accordingly and in time to avoid poor outputs.

Predicting these changes is never easy, but many of these scenarios can be modeled. When companies start simulating the behavior of their AI system under certain conditions, many scalability issues become apparent. For example, a sudden bump in computing demand can significantly hinder the decision-making accuracy of your AI model. Again, considering the pervasive impact of AI on companies’ infrastructure, it’s often other IT systems that are struggling to scale, rather than an AI model itself.

Closing Thoughts

At this point, the opportunities offered by AI are too significant to ignore. However, rushing into adopting AI rarely brings positive results.

Just finding an experienced technology partner and developing an outstanding proof of concept might be solid first steps for conventional software development projects, but not for those dealing with AI. Too many companies get caught in the loop of creating perfect AI algorithms, while few pay enough attention to integrating them successfully into business operations.

Why Do AI Projects Fail Too Often? was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/why-do-ai-projects-fail-too-often-4c4f973bb195?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/why-do-ai-projects-fail-too-often

Big Data in Energy: Possibilities and Limitations

Photo by Flickr from Pexels

As big data continues to disrupt almost every imaginable industry, the energy sector has finally started to catch up. With the recent advancements in IoT, AI and cloud computing, opportunities for more efficient energy consumption and distribution have been opened up. In this article, we explore the applications and limitations of big data in the energy context and discuss how companies can make this digital transformation a reality.

Predictive Maintenance

Equipment manufacturers and process plant operators always struggle to keep their machinery working as efficiently as possible while also keeping an eye on potential failures. In the context of the energy industry, equipment failure often leads to disastrous outcomes, including power outages that impact thousands of people and incur huge economic losses.

With the powerful symbiosis of IoT and machine learning, most industrial equipment units, including assembly robots and maintenance vehicles, can periodically send information about their condition to a centralized unit, making equipment failure much more predictable. ML-enabled software can analyze hundreds of data points, including the age of the machine, its model type, repair history logs, and thermodynamic and acoustic readings, and turn them into a comprehensive report.
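
The article does not specify a model, but as an illustration, a condition-based failure classifier along these lines could be trained on such data points. The feature names below echo the paragraph above, while the values and the random-forest choice are assumptions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical condition-monitoring snapshot per machine (synthetic data).
readings = pd.DataFrame({
    "age_years":         [1, 3, 7, 10, 2, 8, 12, 5],
    "repairs_last_year": [0, 1, 3, 5, 0, 4, 6, 2],
    "avg_temp_c":        [61, 64, 78, 85, 60, 80, 90, 70],
    "vibration_mm_s":    [1.1, 1.4, 3.2, 4.8, 1.0, 3.9, 5.5, 2.1],
    "failed_within_90d": [0, 0, 1, 1, 0, 1, 1, 0],   # label
})

X = readings.drop(columns="failed_within_90d")
y = readings["failed_within_90d"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # estimated failure probability per unit
```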

Given that there are thousands of such machines operating at the same time, advanced methods of data collection and analysis are required. This is why energy companies are turning to Hadoop consultants to streamline their data management.

Smart Supply and Demand Management

Currently, big data analytics is the most important prerequisite for accurate load forecasting. Again, with IoT and AI, companies can predict energy consumption levels based on historical data of energy usage, geographical location, weather, and energy prices. This is a win-win for both the environment and energy companies as maintenance costs are significantly reduced and carbon emissions are lowered.

On the other side of the energy supply chain, consumers can also better monitor their consumption and adapt to fluctuations accordingly. For example, with the help of Nest, a thermostat developed by Google, homeowners can monitor energy consumption and adjust their controls to achieve more cost-efficient energy use.

In addition, businesses that have energy management high on their priority list can also benefit from advanced energy monitoring enabled by big data. For example, horticultural companies can predict when more airflow is needed based on smart sensors installed in greenhouses. When the temperature is about to rise, cooling vents can be automatically opened. This would require an almost negligible amount of energy. Conventionally, companies would rely on thermostats, which trigger very energy-demanding air conditioners to cool the room. On a bigger scale, such initiatives would also significantly lower carbon emissions.

Energy-exclusive Challenges

As with many data-based initiatives, the implications are extremely promising, but only a few companies can achieve long-term success. While digital transformation has become the name of the game for many organizations that are trying to achieve operational efficiency at scale, this notion becomes much more relevant in the energy sector.

Energy companies have to face a very unique set of challenges, though.

First, energy companies are very dependent on environmental conditions. For better or worse, humans can’t control wind dynamics, sunlight, or fossil power thermodynamics, for example. This significantly hinders companies’ ability to prove that these AI-based initiatives are reliable and account for all the challenges posed by nature itself.

Second, energy companies must adhere to governmental regulations. While you can argue that every data-reliant business has to be GDPR-compliant, organizations operating in the energy sector risk accidentally costing people their lives, not just disclosing someone’s personal data. That’s why any change in operations will always be accompanied by multiple rounds of regulatory processes that take a considerable amount of time.

How to Make It Work?

These are the steps that energy companies might take to prepare for digital transformation:

Data Preparation

First things first, data needs to be clean and organized. In the energy context, historical data is critically important. With the latest advancement in data analytics, collecting such data has become much more feasible than ever before. With OCR and NLP, decades-old logs of energy load, equipment maintenance, weather and meteorological data stored as text can be conveniently digitized, making it ready for analysis.

Asset Assessment

Next, it’s critical to perform health checks of energy assets and determine risks. There is no one-size-fits-all solution as every energy company has a unique set of factors. For example, some organizations will need to consider the proximity of energy assets to consumers, while others might need to consider how far the asset is from forests to determine the risk of catching fire. Asset evaluation is currently the most challenging aspect of the process as there is a multitude of variables involved and not every relationship between those variables is obvious.

Asset Replacement and Maintenance

To make utilities smart, the majority of energy assets have to be replaced or upgraded. In many cases, though, it is not economically efficient to upgrade old equipment that is close to being written off; it is better to let it complete its life cycle and then install a newer model.

Workflow Reimagining

While it might not be so appealing to executives, applying big data in the energy sector almost always requires an end-to-end revamp of legacy workflows. For example, given that predictive maintenance is an essential part of the transformation, conventional manual equipment maintenance needs to be adjusted to accommodate it. This also calls for retraining programs for maintenance professionals. As you can see, transforming workflows in an industry as large-scale as energy requires a solid long-term vision.

Conclusion

As with many other industries, the energy sector is on the verge of becoming fully data-driven. However, many challenges related to regulations and scalability emerge as we are moving forward. On a grander scale, a complete reimagining of workflows and significant technological upgrades are required.

Big Data in Energy: Possibilities and Limitations was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/big-data-in-energy-possibilities-and-limitations-974e1ac0572f?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/big-data-in-energy-possibilities-and-limitations

How to Deploy AI Models? Part 2: Setting up GitHub for Heroku and Streamlit

GitHub is a free platform that lets us keep our code in the cloud. At its core, GitHub is a version control service: it keeps a log of all the changes made to a project. GitHub’s backbone is Git, which handles most of GitHub’s operations. GitHub can be accessed through the Git CLI or through Sourcetree (a GUI application). In Part 1 of this series (link) we saw how to set up a GitHub account with the Git CLI on our local system.

You may be wondering why I am writing an article on GitHub and Git instead of going straight to model deployment. The reason is that 90% of our problems are solved automatically if the repository is arranged in a specific way. GitHub also has a very useful feature: it connects directly with many cloud platforms, such as Heroku and Streamlit, which provide free deployment of Python code using Flask or Streamlit.

Before you start, I suggest registering at Streamlit (link) and applying for an invitation from the community. In upcoming posts we will discuss model deployment on Streamlit, so you will want access to your Streamlit account.

Note: If you have set up RSA (SSH) authentication, use the SSH link; otherwise, use the HTTPS link of the repository.

Without further delay, let’s start our journey.

1. Setting up the Directory with Git

In Part 1 of this series, we saw how to connect our local system to GitHub using RSA keys. Assuming you have done that, we can start. First things first: we have to initialize the directory that we are going to upload to our GitHub repository, using the command below:

Command: git init

The above command creates a .git directory inside our repository that tracks all our actions. The image below shows how the repository is initialized.

Figure 1. Initialize the directory

2. Adding files to the track list

After initializing the directory, the next step is to add files to the tracking list. This can be done in several ways, but the most efficient one is shown below.

Command: git add .

The above command adds all files and folders in the repository to the tracking list. The image below shows the process of adding files to the tracking list.

Figure 2. Adding files to the tracking list

3. Committing the changes

After initializing the repository and adding files to the tracking list, it’s time to save all the changes we have made to the files using a commit.

Command: git commit -m "Type your comments here"

This command commits all the changes we have made, so that the differences with respect to the previous versions of the files are tracked in the .git directory. The -m flag specifies the message we want to attach to the commit.

Figure 3. Committing changes

4. Checking the status of the Repo

Sometimes we need to check the status of our repository: which files have been added or removed, which files are being tracked and which are not, and other information. This can be done with the command below.

Command: git status

Figure 4. Checking the repository status

5. Pushing to the Github Repo

Pushing the repository is one of the most frequently used operations in Git. It lets you push your code to a GitHub repository. Initialize the directory with git init and commit the changes before pushing, and create a blank repository on GitHub beforehand.

Step 1:Creating Github Repo

Command : echo "# Demo" >> README.md

Step 2: Adding files to the tracking list in the local repo and committing them

Command : git add .
git commit -m "first commit"

Step 3: Adding remote repository

Command : git remote add origin <repo link>

Step 4: Pushing the repository

Command : git push -u origin master

Figure 5. Pushing files to Github

6. Pulling from the Github Repo

Pulling the repository is another frequently used Git operation. It lets you fetch code from a remote repository, whether someone else’s or your own, into your local repository. Initialize the directory with git init before pulling the respective repository.

Command: git pull <url of the repo>

Figure 6. Pulling the Repository

7. Cloning repo from the Github directory

Sometimes we need to clone someone else’s repository. Git allows us to copy the full contents of a remote repository into our local repository.

Command: git clone <url of the repo>

Figure 7. Cloning Repository

In the next article, we will see how to deploy a trained machine learning model to Heroku using our GitHub repository.

Special Thanks:

As we say, “a car is useless if it doesn’t have a good engine”; similarly, a student is lost without proper guidance and motivation. From the bottom of my heart, I would like to thank my guru and idol, Dr. P. Supraja, who guided me throughout this journey. As a guru, she has lit the best available path for me and motivated me whenever I encountered failure or a roadblock; without her support and motivation, this would have been an impossible task for me.

If you have any queries, feel free to contact me through any of the options mentioned below:

Website: www.rstiwari.com

Medium: https://tiwari11-rst.medium.com

Google Form: https://forms.gle/mhDYQKQJKtAKP78V7

Reference:

Extract installed packages and version : Article Link.

Installation of Git: Link,

Git Documentation: Link

Github Documentation: Link

Notebook Link Extract installed packages and version : Notebook Link

YouTube : Link

How to Deploy AI Models? — Part 2 Setting up the Github For Herolu and Streamlit was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Via https://becominghuman.ai/how-to-deploy-ai-models-part-2-setting-up-the-github-for-herolu-and-streamlit-9bf5b847eb97?source=rss—-5e5bef33608a—4

source https://365datascience.weebly.com/the-best-data-science-blog-2020/how-to-deploy-ai-modelspart-2-setting-up-the-github-for-herolu-and-streamlit
