Interview with Manuel Ujaldón Martínez: NVIDIA CUDA Fellow award


Manuel Ujaldón (left) along with other Ibero-American computer experts

Dr. Manuel Ujaldón Martínez is the first Spaniard to win NVIDIA's international CUDA Fellow award. Ujaldón agreed to give an interview for our portal, in which he shares interesting details about his projects and his work, as well as his relationship with GNU/Linux.

Manuel Ujaldón is a professor in the Department of Computer Architecture at the University of Malaga (UMA), the author of several books and tutorials, a conference speaker, and the instructor of several noteworthy courses. For all this extensive and excellent work, Manuel has earned numerous awards and accolades.

LinuxAdictos: The first question is almost inevitable. Do you usually use GNU/Linux? Which distribution?

Manuel Ujaldón Martínez: I've always been a Linux devotee. Nowadays I use the Linux distributions that the technicians in my department at UMA install, where Ubuntu and SUSE predominate. In my early days, I chose Red Hat / Fedora.

LA: I understand that you are the first Spaniard to win the international CUDA Fellow award from NVIDIA. Three more awards (a CUDA Research Center and two CUDA Teaching Centers) have gone to the University of Malaga, where you work. First of all, congratulations to you and to UMA from our blog. How did this whole journey with CUDA start?

MU: First came the awards to the institution, for which I served as principal investigator. And finally, the individual award. The story is summed up in that phrase attributed to Voltaire: "Luck is when preparation meets opportunity." In 2003, during my first stay at Ohio State University, I taught myself, in an artisanal way, to implement scientific code on the GPU, first with shaders and then with Cg. In 2005 I finished the book in which I documented the whole process. I only intended to pass it on to the students of my summer courses, but shortly afterwards CUDA was born and everything changed. In 2008, more than 4,000 scientific articles were written on the CUDA phenomenon (by 2014 they exceeded 60,000), and I received my first recognition from NVIDIA, a "Professor Partnership" through which they donated a Tesla S2050 server with 4 high-end GPUs to UMA. I was surrounded by very good collaborators, at UMA, at Ohio State... That talent produced all the awards you have mentioned. I just had to pull the cart.
In 2015, there is a CUDA SDK download every 9 seconds, and the census of GPUs running CUDA exceeds 600 million. The awards are now much more competitive, but four years on I keep being renewed as a CUDA Fellow, because NVIDIA supports early adopters and those of us with a passion for teaching CUDA. With more than 50 courses and seminars taught in all this time (some after flying more than 20 hours), the company appreciates my effort. And it gives me the opportunity to see the leading company in my research area from the inside, an invaluable experience. The moral: without preparation, don't demand luck.

LA: NVIDIA has given us Linux users some bittersweet moments. You will remember that "Fuck you!" Linus Torvalds dedicated to NVIDIA. Shortly afterwards, Linus applauded NVIDIA for releasing the Tegra K1 drivers... What do you think explains these changes in attitude?

MU: In its beginnings, NVIDIA was a company designed to make money. But in the last decade, at least in the division I know, it has filled up with scientists from the best universities, mainly Stanford. People like Bill Dally or David Luebke know the added value of spreading knowledge and training. The profit eventually arrives, but it comes by that route. There are now more than 800 registered universities teaching CUDA, which NVIDIA pampers with donations, scholarships, courses... They invest in homegrown talent, where before they sought star signings to win right away. Silicon Valley companies know how to look at the long term; many initiatives seem like a bottomless pit, but they are seeds that germinate later. I understand that, for Linus Torvalds, the NVIDIA of 15 years ago was Lucifer himself. And now they exchange the occasional wink.

LA: Your work is contributing to the field of health, with the processing of biomedical images to detect regions of interest such as tumors or regenerated tissue, and the analysis of degenerative diseases through computational applications. Give us an introduction to these interesting projects...

MU: First of all, the projects are not mine but belong to a group that I coordinate, which works as much as or more than I do. That said, we do not invent new biomedical techniques, because we are not experts in that area; we try to understand the most innovative and computationally expensive processes in order to accelerate them on the GPU. Cancer-detection techniques are becoming more accurate and more preventive, but they require image analysis that can take months on a CPU. On a GPU it can come down to days or even hours, and that makes the process viable. An engineer is a pragmatist, that is the etymology of the word that names our profession, and that is what drives us.

LA: We have seen how computing can improve our lives and how it is already affecting them, but perhaps not as directly as your projects, which seem oriented toward purely humanitarian work. I mean, the purpose is not to develop a technology that can then be applied to the health field; rather, these are projects by and for health. Behind that great researcher there is also a great person... Don't you think?

MU: More than a great person, I consider myself a sensible guy. When you work in a hospital and see cancer up close, it feels great to do your bit. That a patient can be diagnosed days or even weeks earlier is magnificent, even if you cannot do anything to cure him should he turn out to be ill. But consider the patient who is healthy, and what goes through his head every day while he awaits the medical result. Shortening that ordeal brings a satisfaction that developing a video game, for example, cannot give me. Society has somewhat stigmatized computer scientists as strange types ("freaks"), but it takes all kinds. Working in a hospital humanizes you; you become more of a hedonist. It is a great counterpoint, all the more so in the world we live in, with so much unhealthy addiction...

LA: You have continued your bioinformatics research at centers in the United States and Australia. Has no national research center or hospital been interested in putting your work into practice?

MU: Last year the Junta de Andalucía granted me a four-year Project of Excellence to accelerate bioinformatics applications on GPUs, and in the past decade we had another similar one. In this case, we analyze neural activity to detect brain lesions. We collaborate with the Brain Dynamics company of the Andalusian Technology Park, and through it we have access to various hospitals in the area. Hospital Clínico and Hospital Carlos Haya, both in Malaga, and Hospital Costa del Sol, in Marbella, are potential clients, and we hope they can benefit from the project's results. For now it is premature to take stock, as three years of work remain, but we are on the right course and the ship's bow is pointed toward Andalusian healthcare. We hope it comes to fruition; that already happened with the previous project.

LA: Using the power of a GPU for general-purpose applications that require high computing capability (GPGPU) is something that seems to be in fashion. Why do you think it took the industry so long to see that a graphics card was worth more than just video games?

MU: Every great innovation must overcome resistance to change. Intel and AMD processors have been running x86 code for 40 years, a dire instruction set that only survives because users value backward compatibility. Intel has always been aware of this, but its attempts to "modernize" the x86 have been such disastrous failures that over time it has lost the will to persevere. AMD has been very complacent all this time, and in recent years it has had enough to do just surviving. At that point an outsider like NVIDIA arrived and, almost without meaning to, is pulling it off. Many of us wanted to forget that out-of-tune melody, especially when we seemed condemned to listen to it daily. Now we have heavenly music, and, hypnotized, we open our eyes and see that the GPU is a platform that is cheap, versatile (just by gaming or driving the monitor we have already amortized it) and omnipresent (currently three GPUs are sold for every CPU). That's when we think: why not? And then you wake up, because learning to program in CUDA is not easy, especially if you come from Python, where everything works at a high level and is done with your back to the hardware. CUDA is the triumph of the hard worker, of the will to work, of perseverance, of so many values that have fallen into disuse but that we need to recover. It is a miracle that it has penetrated so deeply and so quickly into our current society.
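To give a sense of the effort Ujaldón alludes to, here is a minimal, purely illustrative CUDA C sketch of vector addition (the names and sizes are ours, not from his course material): unlike high-level Python, the programmer explicitly allocates GPU memory, copies the data over, and configures the kernel launch by hand.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        float *d_a, *d_b, *d_c;                        // device (GPU) pointers
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);  // explicit copies:
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);  // nothing is hidden

        int threads = 256, blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n); // one thread per element
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", h_c[0]);                 // expect 3.0
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }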

LA: You started with this more than 10 years ago; in fact, in 2005 you published that book on how to program GPUs to accelerate scientific applications. Was it already an open secret?

MU: I don't think even the most optimistic would have imagined back then that we would get to where we are, much less so soon. The GPU evolves at a much higher rate than the CPU; each generation is shorter and introduces more innovations. That makes the road more beautiful, but also harder for the visionary.

LA: In addition, initiatives such as the HSA Foundation have emerged to steer the development of HSA systems. Could you explain to the rest of us mortals the importance of heterogeneous computing?

MU: The vast majority of current processors integrate a CPU and a GPU on the same chip. The CPU is a multi-core (a few complex cores, around ten) and the GPU is a many-core (many simple cores, around three thousand). Which is more powerful, ten hammers or three thousand scalpels? It depends on the problem you want to solve. But we can all agree that the best is ten hammers *and* three thousand scalpels. That is heterogeneous computing: give up nothing, sign up for everything, and then try to keep 100% of the resources busy. To occupy the CPU, you need the old school: C two decades ago, Java in the last decade, Python in this one. To exploit the GPU, you need CUDA this decade, and we'll see what comes next. Many codes run better on the CPU, and others on the GPU. If you only know how to program one of the two processors, you miss out on the duality, and you paid for it when you bought the PC. With each passing day, the programmer who does not know the GPU is more one-armed, and a company will always prefer an ambidextrous worker.
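As a rough sketch of that duality (illustrative only; the even split and the names are our own): CUDA kernel launches return immediately, so the host can hand half of an array to the GPU while the CPU works on the other half, keeping both processors busy at once.

    #include <cuda_runtime.h>
    #include <stdlib.h>

    // GPU kernel: each thread doubles one element (the "scalpels").
    __global__ void scaleOnGpu(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main(void) {
        const int n = 1 << 20, half = n / 2;
        float *h = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; i++) h[i] = 1.0f;

        // Copy the GPU's share of the data to device memory.
        float *d;
        cudaMalloc(&d, half * sizeof(float));
        cudaMemcpy(d, h, half * sizeof(float), cudaMemcpyHostToDevice);

        // The launch is asynchronous: while the GPU doubles its half,
        // the CPU (the "hammers") doubles the other half.
        scaleOnGpu<<<(half + 255) / 256, 256>>>(d, half);
        for (int i = half; i < n; i++) h[i] *= 2.0f;

        // This copy waits for the kernel to finish and brings the GPU's
        // results back; at this point all of h[] has been doubled.
        cudaMemcpy(h, d, half * sizeof(float), cudaMemcpyDeviceToHost);

        cudaFree(d);
        free(h);
        return 0;
    }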

LA: Linux developers are paying special attention to ARM lately, and for good reason: this family dominates mobile devices. But the interest seems to go beyond low power; for example, AMD has unveiled its K12 architecture, and Opteron A-Series chips for servers have been announced. Is ARM the future? Do you think it will conquer the HPC and home computing sectors, displacing AMD64, SPARC, POWER...?

MU: More than low power, what ARM provides is a new model, because it does not sell you the chip but the design blueprints, together with the license to build it. The other players you mention sell more of a proprietary end product. It is as if one restaurant sold you a paella, and another sold you the recipe to make it at home (while guaranteeing it will turn out as good as the restaurant's). In the long run, if you like paella, it is better to invest in the second option: you will enjoy it more and it will cost you less. Also, by selling the recipe you make more friends, because the day the paella turns out badly, the customer accepts the blame himself; he cannot pin it on the restaurant. This is how ARM collects satisfied customers, and that is always a great investment. A good example is the NVIDIA Tegra you mentioned earlier. Tegra chips carry an ARM processor and compete in the same low-power segment where ARM is king. When NVIDIA entered that market, ARM helped it by handing over a key recipe. Now ARM makes money from every Tegra that NVIDIA sells. For its innovation, and for how it has executed its ideas, ARM deserves its good fortune (and besides, it is a European company). I hope it keeps growing.

LA: HPC is *nix territory, more specifically Linux territory. One explanation for this trend could be its open source nature, but FreeBSD is open source too, and yet the market share speaks for itself. How do you explain this dominant role of Linux in HPC?

MU: For me, FreeBSD is a substitute for Linux: if you already have the pure flavor, why change? And outside the Linux world, I don't see Windows or MacOS looming over HPC. I've been following top500.org for 20 years, and they were always mere extras. The HPC community is made up of scientists, and every piece we sign up for has earned its credibility, not just the operating system. Do you know what we scientists use to write our articles? LaTeX. In our world, Word has a hard time finding a market. And yet, in consumer computing, Word wins by a landslide.

LA: The University of Malaga is ranked 22nd among the universities that contribute the most to free software. What can you say about this position as a member of UMA?

MU: I can say that I am surrounded by brilliant colleagues who could show off their software creations much more, and I have never seen them hatching an economic plan to get rich. A job well done dignifies more than money.

LA: We usually end the interview with a kind of game: give us a brief personal opinion on each of the following terms.

MU: Open source: Working to provide intangibles, something difficult to understand for those driven by economic parameters. Too bad for them; the best things in life are free.
OpenGL: The first standard for graphics programming, to which we owe so much.
OpenCL: The standard for GPGPU programming, a beautiful story that, surprisingly, is heading toward fiasco unless the trend reverses soon. Life is not always fair.
Arduino: The OpenGL of the hardware layer, to which we will surely also owe a lot in a few years.
Linus Torvalds: A guru. Below the top two for me, Steve Jobs and Robert Noyce, but among the 50 most influential figures in the history of technology.

I hope you liked this new interview in the series we are publishing. I encourage those interested to sign up for the 11th edition of the GPU programming with CUDA course, organized by Ujaldón himself, which will take place in July at UMA. In addition, it carries the CUDA Teaching Center endorsement, which makes it unique in Spain.

The course is open to anyone with a minimum knowledge of C programming. Over 60 hours, most of them hands-on, attendees will learn to program graphics cards using CUDA. In addition, a GeForce GTX 480 graphics card donated by NVIDIA will be raffled off.


saeron said:

I have been fortunate to have Manuel as a professor at university, and without a doubt his dedication to spreading CUDA programming is immense. He deserves this recognition, which has been a long time coming. Congratulations.