
IBM Zurich Speeds up Big Data-Based AI Training by 10x or More

Artificial intelligence is a booming and competitive industry right now, with companies and institutions of all kinds focusing their attention on the technology. Training an AI model remains a demanding task, although IBM's Zurich research lab may have come up with a solution. Its generic AI preprocessing block can speed up learning algorithms by an order of magnitude.

Faster AI Training is a Good Thing

Many consumers are understandably wary of the rapid developments in the world of artificial intelligence. That is a natural reaction, as there have been some astonishing breakthroughs in this area in the past few months. Even though the industry is booming, there is always demand for better and faster AI training methods. Coming up with such solutions is not easy by any means. Thanks to IBM Zurich, however, a major breakthrough may be just around the corner.

More specifically, the research institution has unveiled a new generic AI preprocessing building block. Its main purpose is to improve the rate at which machine learning algorithms absorb new information. With a strong focus on big data machine learning, IBM Zurich claims the project can speed up the learning process by at least ten times. It uses mathematical duality to pick out the important pieces of information in a data stream and ignore everything else.

One of the downsides of big data is that there is simply too much information to go through. Even capable AI systems can struggle to process all of the necessary information, and solving this information overflow has proven quite challenging so far. The new breakthrough by IBM Zurich, however, may herald an entirely new era of big data machine learning.


According to IBM Zurich mathematician Thomas Parnell, this is the first generic solution to achieve a 10x speedup. Workloads built on linear machine learning models will be among the first to benefit from this new solution. Keep in mind that the 10x speedup is only a minimum threshold; the end results may be even better than originally projected, depending on which data is introduced and how the machine learning algorithm was designed.

Under the hood, the concept employs hardware accelerators for big data machine learning. Using GPUs and FPGAs for AI training is nothing new, but these devices often lack the memory to hold every data point. They are less valuable to the learning process in those instances, yet they can still be put to work on big data machine learning in other ways. It is an interesting way of repurposing hardware already in use, albeit in a slightly different capacity from what one would normally expect.
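
To put that memory ceiling in perspective, here is a quick back-of-the-envelope calculation. The 16 GB card size and the 1,000-feature examples are purely illustrative assumptions, not figures from IBM; the point is simply that a large data set can easily exceed what a single accelerator can hold.

```python
# Rough arithmetic for the memory ceiling described above.
# The card size and feature width are illustrative assumptions.
bytes_per_value = 4                      # dense float32 features
features_per_example = 1_000
accelerator_memory_gb = 16

bytes_per_example = bytes_per_value * features_per_example
examples_that_fit = (accelerator_memory_gb * 1024**3) // bytes_per_example
print(f"{examples_that_fit:,} examples fit on the accelerator")  # about 4.3 million
```

A data set with billions of examples would therefore have to be streamed in and out of the card, which is exactly where a preprocessing step that decides what deserves a slot in that limited memory can pay off.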

The key to success here is preprocessing every data point to check whether it is the mathematical dual of a point that has already been processed. If such a match is discovered, the algorithm can skip the data point, as it is effectively already known, and this happens more and more frequently as more data is processed. Before being processed, every data point is assigned an "importance score" by measuring the size of the duality gap. As this value drops, the point becomes less important to reprocess, and the algorithm will eventually ignore it altogether.
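
IBM has only described the approach at a high level, so the following is a minimal sketch of what such an importance score could look like for one concrete linear model, an L2-regularized hinge-loss SVM trained in its dual form. The choice of model, the function names, and the budget-based selection step are assumptions made for illustration, not IBM's actual implementation.

```python
import numpy as np

def duality_gap_scores(X, y, w, alpha):
    """Per-example duality-gap contributions for an L2-regularized
    hinge-loss SVM (illustrative choice of linear model, not IBM's code).

    X     : (n, d) feature matrix
    y     : (n,) labels in {-1, +1}
    w     : (d,) current primal weight vector
    alpha : (n,) dual variables, each in [0, 1]
    """
    slack = 1.0 - y * (X @ w)            # positive => the point violates its margin
    # Each contribution is non-negative and shrinks to zero once the primal
    # and dual solutions "agree" on this example, i.e. the point is learned.
    return np.maximum(0.0, slack) - alpha * slack

def points_worth_keeping(X, y, w, alpha, budget):
    """Indices of the `budget` examples with the largest gap scores."""
    scores = duality_gap_scores(X, y, w, alpha)
    return np.argsort(scores)[::-1][:budget]

# Purely synthetic usage example.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 20))
y = rng.choice([-1.0, 1.0], size=1_000)
w = np.zeros(20)
alpha = np.zeros(1_000)
keep = points_worth_keeping(X, y, w, alpha, budget=100)
```

In a full training loop, such scores would be recomputed periodically: points whose contribution has dropped to near zero are skipped or evicted from the accelerator's memory, while points the model still gets wrong keep their slot, which matches the behaviour described above.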

It is evident that this algorithm shows a lot of initial promise. However, there is still a lot of tweaking to be done before it can be commercialized. For now, development will continue in IBM's Cloud, where it is known as Duality-Gap-Based Heterogeneous Learning. This is a watershed development in the world of big data processing, to say the very least. It will be interesting to see how this technology is put to use in the real world over the next few years.

JP Buntinx

JP Buntinx is a FinTech and Bitcoin enthusiast living in Belgium. His passion for finance and technology made him one of the world's leading freelance Bitcoin writers, and he aims to achieve the same level of respect in the FinTech sector.

