Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
Impact of GPU/AI chip economics. Commentary by AB Periasamy, co-founder & CEO of MinIO
“GPU scarcity is going to continue to push AI workloads to the cloud. While most enterprises and AI companies would rather run in a private cloud, the reality is that they are at least six months away from having the available supply to do so. As a result, they will take the economic hit associated with moving the data and compute to one of the major public clouds. The silver lining here for 2024 is that the very GPU scarcity that is helping Nvidia achieve record revenues will drive software innovation and workload optimizations that will reduce future demand. This should have the effect of bringing down GPU pricing in the back half of 2024. Coupled with the arrival of competitive GPU offerings from AMD (a certainty) and Intel (very likely), this may further depress prices and enable the next generation of workloads to run in the private cloud – away from prying eyes. The key will be in the software ecosystem. The savvy analyst knows that Nvidia’s chips are only half the story – the other half is CUDA. The open source community has some work to do to create an equivalent software stack – but there is power in numbers, and 2024 will see the playing field level out on the GPU front, on both the hardware and software sides. Once GPUs become more available and people understand those cloud economics, many will repatriate workloads to on-prem/private cloud where they can truly optimize and accelerate AI innovation.”
AI’s Pandora’s Box Is Already Open: Here’s How to Manage It. Commentary by K. Scott Griffith, CEO and managing partner, SG Collaborative Solutions, LLC
“Artificial Intelligence is taking the world by storm, ushering in a new age of productivity and well-being. But AI also can cause great harm, both deliberate and accidental. For example, studies have shown AI imaging models demonstrate bias against historically underserved demographics in disease diagnosis. In another case, an AI-generated legal brief referenced a fictional case in court. If left unconstrained, this harm will be irreversible. But if we wait for regulators to address the problem, we’ll be waiting for decades. We must take responsibility for harnessing AI’s positive potential, while doing more than we’ve done in the past to safeguard against harm. What’s the solution? A strategic, well-organized government/industry/labor collaboration is needed. An example of such a collaboration is the U.S. airline industry’s Commercial Aviation Safety Team (CAST), which has led to a 95% reduction in the fatal accident rate. This success depended on the FAA’s role as the independent oversight body protecting the public; industry’s understanding of the technical challenges; and labor’s role as the eyes and ears of evolving risk in frontline operations. At a time when aviation was at the forefront of advancing automation technologies, CAST was able to successfully harness knowledge, skills, and abilities to manage emerging risks. The industry moved from being reactive to proactive, and plane crashes in the US became a rarity. We have the same opportunity to manage AI. Input and support must come from legislators, law enforcement, the research and educational communities, and consumer advocacy organizations. Private businesses must lead, instead of waiting until compliance becomes mandatory and burdensome. Regulators must move from compliance to risk-based oversight, and labor must be incentivized to actively participate with economic and social benefits. Transparency and trust are key, while aligning legislation and the legal systems to better address the rapidly advancing challenge.”
The developing database market. Commentary by Percona founder Peter Zaitsev
“As vector data becomes more mainstream, dedicated, specialized vector databases are emerging in hopes of satisfying the growing demand. But it’s important to keep in mind that these systems offer highly specialized capabilities at the exclusion of many other, equally important ones. That’s why, at the same time, we’re beginning to see solutions that aim to integrate vector search and other vector capabilities within mainstream databases. Whether through integrations or extensions, I expect that, for the lion’s share of enterprise users, these integrated approaches will offer more cohesive, lightweight, and familiar solutions for their AI development needs.”
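To make the integration approach concrete, here is a minimal sketch of vector search running inside a mainstream relational database, assuming a PostgreSQL instance with the open-source pgvector extension installed; the connection string, table name, and three-dimensional embeddings are illustrative stand-ins, not a prescribed setup.

```python
# Minimal sketch: vector similarity search inside PostgreSQL via the pgvector
# extension. Connection details, table name, and embedding size are
# illustrative assumptions only.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app")  # hypothetical connection
cur = conn.cursor()

# Enable the extension and create a table with a vector column
# (real embeddings are typically 384, 768, or 1536 dimensions).
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(3)
    );
""")
cur.execute(
    "INSERT INTO documents (body, embedding) VALUES (%s, %s::vector)",
    ("example document", "[0.1, 0.2, 0.3]"),
)

# Nearest-neighbor lookup: '<->' is pgvector's Euclidean distance operator.
cur.execute(
    "SELECT id, body FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
    ("[0.1, 0.25, 0.3]",),
)
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```

The appeal of this style of integration is that ordinary SQL features such as joins, filters, transactions, and access control apply to the embedding column just as they do to any other column.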
ChatGPT’s Latest Updates: Safety First, Always. Commentary by Mohammad, Founder, writerbuddy.ai
“OpenAI has taken a cautious approach in introducing capabilities to understand voice and images, which is highly commendable. They are clear about the model’s limitations and have put safeguards in place against potential misuse, such as declining to identify human faces. They have shown that innovation can coexist with ethical considerations, as it should.”
Generative AI: Failure to Prepare is Preparing to Fail. Commentary by Kevin Campbell, CEO, Syniti
“In a tech landscape abuzz with the potential of generative AI, understanding its power and pitfalls is a must. Organizations across every vertical are looking into how to leverage this technology to get ahead, or simply to ensure they can keep up with the latest and greatest advancements. Rushing implementation and forgoing the necessary learning process isn’t going to be worth the investment. A thoughtful, concerted approach that is rooted in data is essential for generative AI. To truly reap the advantages of AI investments, organizations must take a close look at their data strategy and ensure they are prepared for success. If the goal is to use it to solve specific business problems, it’s key to consider how those models are trained and to examine the underlying foundation of data you’re supplying in order to get the result you need. Data is what trains generative AI models. To get the results most aligned to the business outcomes you’re hoping to gain, high-quality, governed data is the key component. The adage “garbage in, garbage out” applies here. If you have the right data sets that you want to use to train a model or gain insights, but the quality is poor, then you’re still going to end up with a bad result. Poor-quality data can lead to problems like inaccurate recommendations and irrelevant guidance. Generative AI holds immense promise for transforming numerous industries. However, rushing into generative AI without a robust data strategy can lead to costly mistakes and delays. Data quality and context are the linchpins of success. Building strong data foundations and refining AI models iteratively are essential. A thoughtful approach, rooted in data, will ensure that organizations harness the full potential of generative AI, driving meaningful business outcomes.”
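As a purely illustrative sketch of the “garbage in, garbage out” point, the snippet below runs a few basic quality checks on a hypothetical prompt/response training set before it is handed to a fine-tuning pipeline; the column names, minimum response length, and 5% threshold are assumptions for the example, not a recommended standard.

```python
# Illustrative pre-training data quality gate. Column names ("prompt",
# "response") and thresholds are hypothetical assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Return simple quality metrics for a prompt/response training set."""
    return {
        "rows": len(df),
        "null_rows": int(df[["prompt", "response"]].isna().any(axis=1).sum()),
        "duplicate_rows": int(df.duplicated(subset=["prompt", "response"]).sum()),
        "too_short": int((df["response"].fillna("").str.len() < 10).sum()),
    }

def passes_gate(report: dict, max_bad_fraction: float = 0.05) -> bool:
    """Fail the gate if more than 5% of rows have an obvious defect."""
    bad = report["null_rows"] + report["duplicate_rows"] + report["too_short"]
    return bad / max(report["rows"], 1) <= max_bad_fraction

if __name__ == "__main__":
    # Tiny made-up batch: one null prompt and two too-short responses.
    df = pd.DataFrame({
        "prompt": ["What is DataOps?", "Define ELT", None],
        "response": ["DataOps applies agile practices to data.", "ok", "..."],
    })
    report = quality_report(df)
    print(report, "-> train" if passes_gate(report) else "-> fix data first")
```

The point of a gate like this is simply to force the quality conversation before training spend, rather than after a model starts returning inaccurate recommendations.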
Successful DataOps Requires People, Process, and Technology in 2024, Now More Than Ever. Commentary by Amit Patel, Senior Vice President at Consulting Solutions
“As people (data stewards) and processes (business rules, data standards) mature, technology can keep data quality on track. Processes create the foundation of a robust DataOps framework, enabling organizations to confidently navigate an increasingly complex data landscape. Technologies such as AI/RPA can identify areas where data intake and maintenance are degrading quality, whether due to manual data entry, EDI, or other APIs, and can automatically rectify them using available services like Dun & Bradstreet. Such tools allow the profiling and monitoring of trends to understand anomalies that inject poor data quality and help fix them. Finally, ML tools take organizations to the next level of data maturity by detecting patterns and making predictions. This predictive power allows DataOps teams to get ahead of quality issues and ensure that data continually improves.”
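As one hedged illustration of the pattern-detection idea, the sketch below uses scikit-learn’s IsolationForest to flag anomalous records in an incoming data batch so a DataOps team can review them before they degrade downstream quality; the feature columns, injected outliers, and contamination rate are assumptions for the example, not part of the commentary.

```python
# Sketch: flag anomalous incoming records for review before they reach
# downstream systems. Feature names and the assumed 2% contamination rate
# are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical intake batch: order amounts and delivery lead times,
# with a few injected outliers standing in for bad entries.
batch = pd.DataFrame({
    "order_amount": np.concatenate([rng.normal(100, 15, 500), [9_999, -50, 4_200]]),
    "lead_time_days": np.concatenate([rng.normal(5, 1, 500), [180, 0.01, 90]]),
})

model = IsolationForest(contamination=0.02, random_state=0)
# fit_predict() returns -1 for records the model considers anomalous.
batch["anomaly"] = model.fit_predict(batch[["order_amount", "lead_time_days"]])

suspect = batch[batch["anomaly"] == -1]
print(f"{len(suspect)} records flagged for review out of {len(batch)}")
print(suspect.head())
```

Flagged records could then be routed to a data steward or to an automated enrichment service, which is the proactive posture the commentary describes.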
A Fresh Approach: Leveraging End-User Computing Data to Enhance Business Performance. Commentary by Amitabh Sinha, CEO & Co-Founder of Workspot
“The productivity of any employee can be significantly impacted by the quality of their end-user experience. There are three challenges: (i) Do you know if your users are happy? Individual applications, like Teams and Zoom, ask users about their quality of experience, but that data isn’t available to IT. Some companies have deployed Digital Experience tools to understand the overall compute experience; (ii) Can you monitor happiness continuously? User happiness can go from good to poor with one update to the operating system, a driver, a new security agent,…