Seeing the Big Picture: MapReduce, Hadoop and the Cloud

Big data contains patterns that can inform companies about their customers and vendors and help improve their business processes. Some of the biggest companies in the world, like Facebook, have used the MapReduce framework for their cloud computing applications, often by implementing Hadoop, an open-source implementation of MapReduce. MapReduce was designed by Google for parallel, distributed processing of big data.

Before MapReduce, companies needed to hire data modelers and buy supercomputers to extract timely insights from big data. MapReduce has been an important development in helping businesses solve complex problems across big data sets, like determining the optimal price for products, understanding the return on advertising investment, performing long-term predictions, and mining web clicks to inform product and service development.

MapReduce works across a network of low-cost commodity machines, making actionable business insights more accessible than ever before. It is a strong computational tool for solving problems that involve pattern matching, social network analysis, log analysis, and clustering.

The logic behind MapReduce is to divide big problems into small, manageable tasks that are distributed to hundreds or thousands of server nodes, which operate in parallel to generate results. From a programming standpoint, this involves writing a map script that transforms the data into a collection of key-value pairs, and a reduce script that aggregates all pairs sharing the same key. One challenge is the time it takes to convert and break the data into new key-value pairs, which increases latency.
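
As a rough illustration, here is a minimal, single-machine sketch of the map and reduce steps in Python, using word count, the canonical MapReduce example; a real cluster would distribute the key-value pairs across nodes during the shuffle:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (key, value) pair -- here (word, 1) -- for every word.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle: group values by key, then reduce: sum the counts per word.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(values) for key, values in grouped.items()}

documents = ["the cat sat", "the dog sat"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```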

Hadoop is Apache’s open-source implementation of the MapReduce framework. In addition to the MapReduce distributed processing layer, Hadoop uses HDFS for reliable storage and YARN for resource management, and it offers flexibility in dealing with structured and unstructured data. New nodes can be added to a Hadoop cluster without downtime, and if a machine goes down, its data can be recovered from replicas. Hadoop can be a cost-efficient solution for big data processing, allowing terabytes of data to be analyzed within minutes.
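
To make the programming model concrete on Hadoop itself, the streaming interface lets the map and reduce scripts be ordinary programs that read stdin and write stdout. Below is a minimal word-count sketch, assuming a standard Hadoop streaming setup:

```python
#!/usr/bin/env python3
# mapper.py -- emits one tab-separated (word, 1) pair per word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word.lower()}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop sorts mapper output by key before piping it here,
# so all counts for one word arrive consecutively.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word != current_word and current_word is not None:
        print(f"{current_word}\t{count}")
        count = 0
    current_word = word
    count += int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

A typical invocation (the jar path and HDFS directories vary by installation) resembles: hadoop jar hadoop-streaming.jar -input /data/text -output /data/counts -mapper mapper.py -reducer reducer.py -files mapper.py,reducer.py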

But cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform offer similar MapReduce components, with the operational complexity handled by the cloud vendor instead of the individual business. Hadoop was known for tightly combining computation with storage, but in the cloud, object stores such as Amazon S3 have taken the place of HDFS while still supporting computation, and container orchestration technology like Kubernetes is used instead of YARN. With this shift to cloud vendors, concerns have grown about the long-term vision for Hadoop.

Hortonworks was a data software company that supported open-source software, primarily Hadoop. In January 2019, Hortonworks closed an all-stock $5.2 billion merger with Cloudera. While Cloudera also supports open-source Hadoop, it sells a proprietary management suite intended to help with installation and deployment, whereas Hortonworks was 100% open source. In May 2019, another Hadoop provider, MapR, announced it was looking for a new source of funding. On June 6, 2019, Cloudera’s stock declined 43% and the CEO left the company.

Understanding the advantages and disadvantages of the MapReduce framework and Hadoop in big data analytics helps inform business decisions as this field continues to evolve. On the drawbacks of Hadoop, Monte Zweben, the CEO of Splice Machine, which builds a relational database on Hadoop, says, “When we need to transport ourselves to another location and need a vehicle, we go and buy a car. We don’t buy a suspension system, a fuel injector, and a bunch of axles and put the whole thing together, so to speak. We don’t go get the bill of materials.”

What do you think? Please DM me or leave your feedback in the comments below.

#Hadoop #MapReduce #CloudComputing

Consequences of Multiplying the Internet of Things

The Internet of Things (IoT) is multiplying as technology costs decrease and smart device sales increase. Generally speaking, if a device has an on/off switch, there is a good chance it will become part of the IoT movement. IoT architecture includes the sensors on devices, the Internet, and the people who use the applications.

IoT devices are connected through Internet infrastructure and a variety of wireless networks. Smart devices by themselves are not well suited to handling massive amounts of data, let alone learning from the data they receive and generate. Currently, the data from IoT devices is relatively basic because most devices have little computing power and limited capacity to store data. However, that basic data gets transferred to a data processing center with more advanced computing capability, which produces the desired business insights.
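
As a minimal sketch of that handoff, a constrained device might simply package a reading as JSON and post it to an ingestion endpoint; the URL and field names below are illustrative, not a real service:

```python
import json
import time
import urllib.request

INGEST_URL = "https://example.com/iot/ingest"  # hypothetical endpoint

def read_sensor():
    # Stand-in for a real sensor driver; the device only produces basic data.
    return {"device_id": "thermo-01", "temp_c": 21.5, "ts": time.time()}

payload = json.dumps(read_sensor()).encode("utf-8")
request = urllib.request.Request(
    INGEST_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
# The heavy analysis happens in the data processing center, not on the device.
with urllib.request.urlopen(request) as response:
    print("ingest status:", response.status)
```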

IoT smart devices require unique addresses to connect to the Internet, and the growing number of smart devices makes it challenging to address all of these new endpoints. Internet Protocol version 4 (IPv4) has the capacity for about 4.3 billion addresses, while Gartner estimates that by 2020 the world will have over 26 billion connected devices. However, proposals for a unified IoT addressing scheme, most notably the far larger IPv6 space, may help solve this bottleneck.
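
The arithmetic behind the bottleneck is easy to check; a few lines of Python show how far 26 billion devices outruns the IPv4 space, and how comfortably IPv6 covers it:

```python
ipv4_space = 2 ** 32           # 32-bit addresses: about 4.3 billion
ipv6_space = 2 ** 128          # 128-bit addresses: about 3.4e38
devices_2020 = 26_000_000_000  # Gartner's estimate of connected devices

print(f"IPv4 addresses: {ipv4_space:,}")                             # 4,294,967,296
print(f"devices per IPv4 address: {devices_2020 / ipv4_space:.1f}")  # ~6.1
print(f"IPv6 addresses: {ipv6_space:.2e}")                           # ~3.40e+38
```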

IoT applications also face bottlenecks around the quality of current artificial intelligence algorithms. For example, demands for increased transparency and reduced bias in algorithms continue to pique the interest of citizens and could pose challenges to some proprietary business models. With machine learning, producing training sets that are actually representative of the targeted populations also remains a challenge.

There are additional obstacles related to the physical path of the transmission media. IoT devices can receive or transmit data using a variety of technologies, from RFID to Bluetooth, and the common problems associated with these transmission media, from bandwidth limits to interference, create problems for IoT as well. Optimizing transmission media to support and sustain networks remains a challenge for IoT applications.

Security is also an ongoing concern for IoT, since the basic data feeds into a receiver on the internet. Many IoT devices are low-powered, constrained devices, which makes them more susceptible to attack. Security challenges for IoT include ensuring that data has not been changed during transmission and protecting data from unwanted exposure. The World Economic Forum estimates that a successful attack on a single cloud provider could cause $50 billion to $120 billion of economic damage. With the growth of poorly protected devices on shared infrastructure, hackers have a wide attack surface, and IoT botnets could harness swarms of connected sensors, from thermometers to sprinklers, to send malicious traffic. A recent State of IoT Security Research report found that 96 percent of businesses and 90 percent of customers think there should be IoT security regulations. As public confidence in security decreases while IoT sales increase, regulatory reform is the likely result.
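
On the integrity point in particular, a common lightweight safeguard is a keyed hash (HMAC) attached to every message; this minimal sketch uses Python's standard library, with a hypothetical pre-shared key and payload:

```python
import hashlib
import hmac

SHARED_KEY = b"device-provisioning-secret"  # hypothetical pre-shared key

def sign(message: bytes) -> str:
    # Device side: attach an HMAC tag so tampering in transit is detectable.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Receiver side: constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign(message), tag)

reading = b'{"device_id": "sprinkler-7", "flow": 3.2}'
tag = sign(reading)
print(verify(reading, tag))            # True: message intact
print(verify(b'{"flow": 9999}', tag))  # False: payload was altered
```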

IoT allows businesses to solve problems and even delight their customers by leveraging the intelligence of connected devices. While there is always uncertainty and risk with new technology, and customer confidence in IoT may waver, the promise of IoT is a fully connected world where devices connect with one another and with people to enable action that has never before been possible.

#IoT #Cybersecurity #BigData

Web Evolution and Eliminating Performance Bottlenecks

If the Internet is a bookstore, the World Wide Web is the collection of books within that store. The Web is a collection of information that can be accessed via the Internet. The Web was created in 1989 by Sir Tim Berners-Lee and remained quiet through the 1990s, but as users increased, companies like Google developed algorithms to better index content, which eventually led to the concept of SEO (a significant driver of the Internet today). Sir Tim Berners-Lee’s initial vision of the Web was explained in a document called “Information Management: A Proposal,” but today, with Facebook and social media, the Web has also become a communication tool.

Back in 1989, Sir Tim Berners-Lee described three fundamental technologies that are still foundational to the Web today: HTML, URI, and HTTP. HTML is the markup language of the Web, a URI is the address of a resource (commonly seen as a URL), and HTTP supports the retrieval of linked items across the Web. These core technologies of Web 1.0 are responsible for today’s large-scale web data. Web 1.0’s bottlenecks included web pages that were only understandable by a human, and the experience was slow, with pages that needed to be refreshed often. In retrospect, it is easy to see that Web 1.0 had servers as a major bottleneck and lacked a sound systems design for its networked elements. Nonetheless, Web 1.0 is referred to as the “web of content” and was critical to the development of Web 2.0.
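
The division of labor among the three is easy to see in a few lines of Python (example.com is a domain reserved for demonstrations):

```python
import urllib.request

uri = "https://example.com/"            # URI: the address identifying the resource

with urllib.request.urlopen(uri) as r:  # HTTP: the protocol that retrieves it
    html = r.read().decode("utf-8")

print(html[:60])                        # HTML: the markup describing the page
```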

Web 2.0 began in 1999 and let people contribute, modify, and aggregate content using a variety of applications, from blogs to wikis. This was revolutionary in the sense that the Web moved from being focused on content to being focused on communication, where content was created by individual users instead of just being produced for them. Web 2.0 embraced the reuse of collective information, crowdsourcing, and new methods of data aggregation. In terms of online architecture, Web 2.0 drove collaborative knowledge construction, and networking became more critical to driving user interaction. At the same time, issues of open access and reuse of free data started to surface. Performance suffered from frequent database access, which strained Web 2.0’s scalability. The good news is that Web 1.0’s database-server bottlenecks were eliminated by databases held on RAM disk and by high-performance multi-core processors that supported enhanced multi-threading. Yet with the benefits of Web 2.0’s flexible web design, creative reuse, and collaborative content development came a new bottleneck: the sheer volume of content created by users.

Web 3.0 started around 2003 and was termed the “web of context.” It is the era of defined data structures and the linking of data to support knowledge searching and automation across a variety of applications. Web 3.0 is also referred to as the “semantic” Web, revolutionary in the sense that it shifted the focus to a Web read not only by people but also by machines. In this spirit, different models of data representation surfaced, like the concept of nodes, which led to the scaling of web data. One challenge of the Web 3.0 data models was that locating and extracting the data turned into a bottleneck.
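
To make the idea of machine-readable, linked data concrete, here is a small sketch using the rdflib Python library; the example.org namespace and the facts themselves are illustrative:

```python
# pip install rdflib
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Each fact is a (subject, predicate, object) triple linking named nodes.
g.add((EX.TimBernersLee, EX.invented, EX.WorldWideWeb))
g.add((EX.WorldWideWeb, EX.createdIn, Literal(1989)))

# A machine can traverse the graph directly instead of parsing prose.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```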

Web 4.0 began around 2012 and was named the “web of things.” It further evolved the Web into a symbiotic web focused on the intersection of machines and humans. At this point, Internet of Things devices, including smart home and health monitoring devices, started to contribute to big data. Mobile devices and wireless connections helped support data generation, and cloud computing took a stronghold in helping users both create and control their data. However, bottlenecks arose from the many devices, gadgets, and applications connected to Web 4.0, along with changing Internet of Things protocols and exponentially growing big data logs.

Web 5.0 is currently referred to as the “symbiont web” or the web of thoughts. It is designed in a decentralized manner in which devices can discover other interconnected devices. Web 5.0 creates personal servers for the personal data stored on a smart device like a phone, tablet, or robot, enabling the device to scan its 3D virtual environment and use artificial intelligence to better support the user. The bottleneck in Web 5.0 becomes the memory and computational power each interconnected smart device needs to process the billions of data points that artificial intelligence requires. Web 5.0 is also recognized for emotional integration between humans and computers, but the algorithms involved in understanding and predicting people’s behavior have created a bottleneck of their own.

Where will Web evolution end? One thing is for sure: data generation is increasing year after year. To continue to get new functionality out of the evolving Web, new bottlenecks need to be addressed, and there are a variety of future considerations, from encoding strategies to improved query performance. However, the best way to predict what will happen in the future is to invent it.

#WebEvolution #WebPerformance #OnlineArchitecture #Innovation

About the Author

Shannon Block is an entrepreneur, mother, and proud member of the global community. Her educational background includes a B.S. in Physics and a B.S. in Applied Mathematics from George Washington University and an M.S. in Physics from Tufts University, and she is currently completing her Doctorate in Computer Science. She has been the CEO of both for-profit and non-profit organizations. Follow her on Twitter @ShannonBlock or connect with her on LinkedIn.

Detecting Healthcare Fraud using Machine Learning

As elderly populations rise, so do the medical costs that come with treating those who need care. Medicare provides insurance to those 65 and older to help with the financial burden of healthcare. Medicare costs about $588 billion and is expected to increase by 18% in the next decade. Healthcare fraud is estimated by the NHCAA to be as much as 10% of healthcare spending, which for Medicare would be roughly $58.8 billion. Fraudulent claims include both patient abuse or neglect and billing for services that were never received. By using publicly available claims data, machine learning can help detect fraud in the Medicare system and reduce the cost to taxpayers.

Machine learning is a subset of artificial intelligence that can find the fraudulent needle in the haystack by applying continuously learning algorithms. Each time the algorithm is right about a fraudulent transaction, that information feeds back into the model, making it smarter. The same happens when the algorithm is wrong.

Using machine learning on publicly available datasets is a growing trend with great potential. The publicly available Medicare claims data has 37 million cases. An essential part of the machine learning process is labeling, as it affects both data quality and model performance. Researchers have created fraud and non-fraud labels by mapping the claims data to other publicly available resources, like the National Provider Identifier registry and the List of Excluded Individuals and Entities (LEIE) database. The 37 million cases can then be reduced to under 4 million that can be run through a machine learning algorithm to help identify fraudulent providers.
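
A simplified sketch of that labeling step, with toy values and hypothetical column names standing in for the real claims and exclusion files:

```python
# pip install pandas
import pandas as pd

# Toy stand-ins: the real Medicare claims and LEIE files have many more fields.
claims = pd.DataFrame({
    "npi": [111, 222, 333],                      # provider identifiers
    "total_payment": [12000.0, 450.0, 89000.0],
})
exclusions = pd.DataFrame({"npi": [333]})        # providers excluded for fraud

# Label a claim as fraud if its provider appears on the exclusion list.
claims["fraud"] = claims["npi"].isin(exclusions["npi"]).astype(int)
print(claims)
```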

For example, unsupervised machine learning has been used successfully on Florida Medicare data to detect anomalies in Medicare payments using regression techniques and Bayesian modeling. Decision trees and logistic regression with random undersampling of the class distribution have also produced promising results. Initial results indicate that having more non-fraud cases helps the model learn better and distinguish more accurately between fraud and non-fraud cases.
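
Here is a hedged sketch of the undersampling idea, using synthetic data rather than the actual Medicare features or the researchers' exact pipeline:

```python
# pip install scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, heavily imbalanced stand-in for labeled claims data.
X = rng.normal(size=(10_000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=10_000) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Random undersampling: keep every fraud case, then draw an equal number
# of non-fraud cases so the classifier trains on a balanced set.
fraud_idx = np.where(y_train == 1)[0]
normal_idx = rng.choice(np.where(y_train == 0)[0],
                        size=len(fraud_idx), replace=False)
keep = np.concatenate([fraud_idx, normal_idx])

model = LogisticRegression().fit(X_train[keep], y_train[keep])
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```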

Using machine learning to detect fraud is game-changing. It allows humans to be notified early in a fraud attempt, stopping losses earlier in the process. Keeping a continuous watch on publicly available data can go a long way toward minimizing fraudulent claims and accelerating the time it takes to prosecute criminals.

#BigData #MachineLearning #AI #Healthcare

Data Brokers Pay for Your Healthcare Information

A multi-billion dollar industry exists around the buying and selling of your healthcare data. Certain state exceptions under federal privacy rules allow hospital data to be sold to data brokers. Private companies seek access to your medical records to advance their missions, but sometimes also to make a quick buck.

The right of businesses to profit from health information without patient permission has previously been upheld by the United States Supreme Court. In the 1990s, for example, a data broker was selling big pharmaceutical companies data on what individual providers were prescribing to patients. The pharmaceutical companies then used that information to market directly to prescribers in order to increase drug sales. Once patients began to understand the practice and voice complaints, a couple of states passed legislation limiting the trade of prescriber-specific information. The data broker objected, the case went to the Supreme Court, and the data broker won on free-speech grounds.

The practice of buying and selling medical data is technically permissible under the Health Insurance Portability and Accountability Act (HIPAA) because the data is supposed to be anonymous. One challenge with the increasing number of these deals, however, is that patient privacy is at risk: it is now easier to re-identify deidentified records by piecing them together with unstructured data sources like Facebook, Twitter, and other social media platforms.

However, not all data brokers have misguided intent; many organizations in this space have honorable missions. For example, Sloan Kettering made a deal to sell pathology samples to Paige.AI to develop artificial intelligence to help find a cure for cancer. In that case, patients’ medical data is being used to increase the quality of care. Still, data brokers currently have no fiduciary responsibility to patients.

There are some considerations that health systems can put in place to help reinforce ethical best practices:

1. Only enter into a data-transfer deal if it benefits patients

2. Keep the agreement to sell data separate from the consent form patients complete for their normal healthcare

3. Have the third-party vendor, not the provider, ask the patient for permission to sell their data, so there is no misunderstanding or abuse of the patient/provider relationship

4. Set default consent options so that patients are not opted in to having their data sold

5. Word consent language in an easy-to-understand fashion, potentially in video form, so that patients clearly understand the usage, the risks, and their options

6. Provide transparency to patients and healthcare staff about who owns the records and how they will be used, especially if there is a financial gain for the health system

Last year, GlaxoSmithKline, a large pharmaceutical company, came under global scrutiny over its $300 million investment in 23andMe, due to concerns about the lack of transparency around what data was being shared, combined with the lack of choice for patients to participate.

Researchers predict that healthcare data will grow faster than data in manufacturing, financial services, or media, at a compound annual growth rate of 36 percent through 2025, so these issues are likely to keep surfacing for governing bodies as well as public policy influencers.

What has been your experience with data brokers? How do you think this will play out in the future?

#AI #BigData #BioEthics #Healthcare

Data Breaches Cost Healthcare $408 per Record: How to Prevent the Pain

According to federal reports, healthcare data breaches disclosed in June 2019 exposed the data of 3.5 million people. The majority of those records came from Dominion National, which said the incident may have started as early as April 2010; the accessed data included enrollment information, demographic data, and associated dental and vision information. Similarly, LabCorp and Quest Diagnostics reported in June 2019 that an unauthorized user had accessed a vendor payment system, affecting nearly 8 million and 12 million patients, respectively. These alarming numbers do not even include encrypted data lost by organizations, since HIPAA does not consider the loss of encrypted data a breach. The United States healthcare system as a whole lost $6.2 billion in 2016 from data breaches, with the average breach costing a company $2.2 million. Research from IBM Security found that in 2018 the cost to healthcare organizations was $408 per record, up from $380 per record in 2017.

According to the HIMSS 2019 Cybersecurity Survey, 59 percent of all data breaches in the past 12 months started with phishing, in which an attacker masquerades as a reputable person in email or other communications. Cybercriminals often change their approach and are now increasingly using techniques powered by artificial intelligence. In response, healthcare organizations are actively deploying artificial intelligence solutions to detect suspicious activity, as well as increasing employee education and cloud-based security.

There are some basic techniques that healthcare organizations should be deploying in addition to conducting risk assessments and providing employee education. For example, healthcare organizations should:

  • Take time to understand cloud service-level agreements, retain ownership of data that can be accessed in the event of a crash, and ensure service-level agreements comply with state privacy laws
  • Establish subnet wireless networks for guests and other public types of activity
  • Use multi-factor authentication on employee devices
  • Use business association agreements to help distribute risk and clarify vendor reporting requirements
  • Have a “bring your own device” policy based on current best practices, such as complex password requirements and policies that can actually be enforced
  • Plan for the unexpected by assessing how long the organization can function in different areas without data, while also having an emergency solution for backing up information and restoring data

These tips can be incorporated into the organization’s cybersecurity framework. There are benefits to thinking through these strategies before they are mandated, so that the organization has an effective cyber-defense program that protects both patients and itself.

#Cybersecurity #AI #BigData #Healthcare

The Future of Open Infrastructure: OpenStack Cloud Computing Platform

OpenStack is an open-source cloud operating system that is relatively simple to install and provides massive scalability, helping organizations move toward enterprise-wide, interdepartmental operations. Providing a stable foundation for both public and private clouds, OpenStack offers plug-and-play components with at-a-glance visualizations of how the different parts work together. Its dashboard gives control to administrators while allowing users to provision resources through a web interface. The OpenStack platform enables the deployment of container resources on a single network. It is one of the fastest-growing solutions for building and managing cloud computing platforms, with over 500 customers such as Target, T-Mobile, Workday, American Express, GAP, Nike, and American Airlines.
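
For those who prefer scripting to the dashboard, the openstacksdk Python library exposes the same APIs. A minimal sketch, assuming a cloud profile named "mycloud" is defined in your clouds.yaml:

```python
# pip install openstacksdk
import openstack

# Connect using the "mycloud" profile from clouds.yaml (name is illustrative).
conn = openstack.connect(cloud="mycloud")

# List running instances through the same API the dashboard uses.
for server in conn.compute.servers():
    print(server.name, server.status)

# Provisioning a new instance looks like this (IDs are placeholders):
# conn.compute.create_server(
#     name="demo", image_id="<image>", flavor_id="<flavor>",
#     networks=[{"uuid": "<network>"}])
```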

While there can be additional costs for specific versions, it is free to sign up for a public cloud trial: https://www.openstack.org/passport/

After installing OpenStack, DevStack can be used to better understand dashboard functionality, and it gives contributors a complete local environment to test against: https://docs.openstack.org/devstack/latest/

Free training is also available to help people master and adopt OpenStack technology: https://www.openstack.org/marketplace/training/

While self-service is possible, should you choose to use a vendor for OpenStack management, a few key questions to ask potential vendors include:

  • Can you be specific on how you can help my company support an OpenStack deployment?
  • Can you share what kinds of workloads your OpenStack distribution has supported in the past?
  • What kind of flexibility is incorporated in your OpenStack solution?
  • What kind of cost reductions should be anticipated from deploying an OpenStack infrastructure?

Do you have experience with OpenStack? If so, please share your experience with me via DM or in the comments.

#OpenStack #CloudInfrastructure #BigData

Who Runs the World? Amazon Web Services

If you think most of Amazon’s operating income comes from those packages delivered so quickly to your doorstep after a click of a button, you’d be wrong. Amazon earns billions from its cloud platform, Amazon Web Services (AWS), which has benefited from a more interconnected world where transactions are increasing exponentially in volume.

With a growing need to better store, verify, and secure transactions, AWS allows businesses to run web and application servers in the cloud, securely store files, use managed databases like MySQL, Oracle, and SQL Server, and deliver files quickly through a content delivery network. In short, AWS is core to Amazon’s business model and provides database storage, content delivery, and computation power. It has been around for 13 years, offers 165 fully featured services across 21 geographic regions, and is used by over 1 million customers, including Netflix, Airbnb, Johnson & Johnson, Lyft, CapitalOne, and General Electric.

For developers who may not have prior experience with machine learning, artificial intelligence, the Internet of Things, or augmented reality, AWS provides an accessible starting point. For example, Amazon Personalize allows developers to add custom machine learning models for product recommendations, search results, and direct marketing. The Amazon Personalize API uses algorithms from Amazon’s own retail business.
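
As an illustrative sketch, fetching recommendations through the boto3 Python SDK looks roughly like this; the campaign ARN and user ID are placeholders, and a trained Personalize campaign must already be deployed:

```python
# pip install boto3 (AWS credentials must be configured)
import boto3

personalize = boto3.client("personalize-runtime", region_name="us-east-1")

# Placeholders: substitute the ARN of your deployed campaign and a real user.
response = personalize.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo",
    userId="user-123",
    numResults=10,
)

for item in response["itemList"]:
    print(item["itemId"])
```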

Some of the benefits of AWS include low-cost services, ease of use, versatile storage, and reliability. There are, however, a few security limitations and technical support fees, and the product faces general issues associated with cloud computing, such as limited control, downtime, and backup protection. Many of these disadvantages can be overcome or mitigated, making Amazon Web Services a leader in cloud platforms.

For those wanting to test out Amazon Web Services, you can get started with the free tier: https://aws.amazon.com/getting-started/

Amazon also offers several free trainings:

AWS Cloud Practitioner Essentials https://www.aws.training/learningobject/curriculum?id=16357

AWS Machine Learning Services https://www.aws.training/learningobject/video?id=16207

AWS Analytics Services Overview https://www.aws.training/learningobject/video?id=16202

Have you used Amazon Web Services?  What has been your experience?

#AWS #CloudPlatform #MachineLearning #ArtificialIntelligence

A Refresher on Board Governance

Master Yoda shares, “Always pass on what you have learned.” While we may hope the boardroom is full of Yodas, the reality is that the boardroom is always changing, and best practices around boardroom governance exist for a reason. Governing bodies face a constant balance between pursuing business opportunities and maintaining accountability and ethical integrity. The 2007-2008 global financial crisis put a heightened sense of urgency on the need for improved ethical frameworks and governance in business.

Good governance is at the heart of any successful company. Enterprise governance needs to balance economic and social pressures and take into consideration the viewpoints of different stakeholders, from individuals to collective groups. A governance framework supports the efficient use of resources and formalizes accountability for the stewardship of those resources. The goal of enterprise governance is to align the interests of individuals, businesses, and society in achieving business objectives. Ethical considerations matter not only because of negative pressures from situations like the 2007-2008 financial crisis, but also because ethical behavior and corporate social responsibility can bring significant benefits to organizations. Three examples show how governance can impact organizations:

•   The Passenger Rail Agency of South Africa had a situation where the acting CEO was fired by the board, and the Minister of Transport then dissolved the board. Reports said the board had been undermined and was not accountable to the shareholders.

•   In another example, Innovations Theatre, in operation for two decades, had a very large board focused on board development and future visioning. The board consisted of “white-skin and white-collar” members representing many corporate sponsors. Parallel to this governing board, another corporate board represented even more businesses.

•   A third example is the Foster Dance Troupe, which teaches dance in the inner city. Its founder led the organization for two decades before dying a few years ago. The board was faced with more responsibility, and the current structure emphasized committee reports.

In the first example, there was a political issue where the shareholders did not seem to be involved in the governance process. In the second example, the board’s lack of diversity may raise some eyebrows as it relates to community support; the board was also too large, with over-dependence on one leader. In the last example, the Dance Troupe’s board was in the early stages of development after losing the founder, which refocused the mission as well as the structure of the organization. The board had an opportunity to define clearer roles and responsibilities, as well as the distinction between board and staff.

These examples share common themes that are essential to board effectiveness: a strong board chair, clear roles and responsibilities for board members, a CEO who acts as and is treated like a partner, and a board that can confront big questions. Strong governance systems matter because they increase the accountability of organizations, help avoid disasters before they happen, and move businesses toward their missions while maintaining critical legal and ethical standing.

Have you been involved in any similar experiences? How did you deal with the complex situation? What do you think is critical for good governance?

#BoardGovernance

AI vs. IoT: What’s the Difference?

While Artificial Intelligence (AI) and the Internet of Things (IoT) are both hot topics, they are not the same thing, though they are connected and related. Artificial intelligence is a field of science that works to imitate intelligent behavior in computers. The Internet of Things is the internetworking of devices such as home systems, sensors, cars, and appliances that can communicate with one another and often with the external environment, including other cars, devices, and human beings.

Some of the differences between AI and IoT involve their interaction with cloud computing, scalability, cost, and the ability to learn from data. With cloud computing, for example, IoT generates significant amounts of data, and the cloud provides a pathway for that data. AI intersects with cloud computing differently, by allowing devices to act and react in ways more similar to human experience.

In terms of learning from data, an IoT deployment can have many sensors, each with a fixed set of processes that share identical information over the internet, whereas an AI system actually learns from its activities and errors as it tries to evolve into a better version of itself. As for cost, IoT deployments generally cost much less than $50K USD with all components included, from hardware to infrastructure, whereas AI charges are typically calculated case by case and can vary substantially based on complexity and industry.

IoT focuses on connecting machines and making use of the collected data, while AI is about mimicking intelligent behavior in machines. As devices powered by IoT continue to multiply, AI can help deal with the resulting big data by making sense of it. That said, IoT can exist without AI, and AI can exist without IoT. But data is only useful to humans if it creates insights that can be acted upon, and using IoT and AI together creates connected intelligence.
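
As a toy illustration of that division of labor, the "IoT" part below is a simulated stream of thermometer readings, and the "AI" part is a deliberately simple statistical check that flags anomalies; real deployments would use far richer models:

```python
import statistics

# Simulated readings from a connected thermometer (the "IoT" side).
readings = [21.1, 21.3, 20.9, 21.2, 35.7, 21.0, 21.4]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# The "AI" side: flag readings more than two standard deviations from the mean.
anomalies = [r for r in readings if abs(r - mean) > 2 * stdev]
print(anomalies)  # [35.7] -- the spike a human would want to act on
```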

A use case of IoT and AI working together is Tesla Motors’ self-driving cars. In this example, the car is the “thing,” and the power of AI is used to predict the car’s behavior in a variety of environments. Tesla cars operate as a network: when one car learns something, all the cars can learn it.

Several data scientists believe the future of IoT is AI. Undoubtedly, when the two are combined, the value delivered can increase for the customer as well as for the organization.

#BigData #IoT #AI