Ritu Shrivastava 2012 Analyst Day Presentation

April 1, 2012

This post is an experiment.

Below is the transcript of Ritu Shrivastava’s 2012 SanDisk Analyst Day presentation with slides.

I’m going to be out of the country for a few weeks and won’t be posting.

In a sense this post is a placeholder, but a placeholder worth pondering: Original source material on how SanDisk sees the next 10 years shaping up, technology-wise.

The plan is to follow this post with a post on ReRAM, the third prong of SanDisk's three-pronged strategy, once I'm back.

**** Transcript Below ****

Jay Iyer: Thanks, Greg. So, slight change in plans. We are running ahead of schedule, and true to one of our core values, which is execute and exceed, we have done that so far. So I think we'll cover one more presentation before taking our lunch break.

So the next presenter will be Ritu Shrivastava. He's our Vice President of Technology Development. His responsibilities include process and device technology development in both our Milpitas facility as well as our memory development facilities in Japan. He's also a fellow of the Institute of Electrical and Electronics Engineers, the IEEE, as you may know, and has served as the CMOS Technology Editor for the journal IEEE Transactions on Electron Devices.

Ritu has more than 30 years’ experience in the semiconductor industry in areas such as development and technology transfer of numerous generations of SRAM, DRAM, and flash memories. And he holds more than 25 patents. So with that, it is my honor to invite Ritu.

Ritu Shrivastava: Thank you, Jay, and welcome. Well, those 30 years actually date me as well. Over the years, I've worked in many different technologies, DRAM, SRAM, flash, et cetera, and I can tell you one thing: there has never been a more exciting time than right now in my 30 years of technology development.

Just to give an example, I brought something here. I've been into gadgets, and into photography and videos and music, et cetera, so I can relate to SanDisk's mission of enriching people's lives. So here's a gadget relic from the past, from the mid-'80s, '84, '85.

This is an HP-71B, which was the handheld computer, the best gadget of the time and very, very productive, a hit in Berkeley. Really, that was a very big hit. Guess what memory modules it had? For some reason, I still have the box and the memory module.

It's 4K. 4K of memory, that's all that was available when it was introduced. Compare that to the kinds of memories we have right now: 64 gig, even 128 gig.

I paid $75 for it. So if you calculate per gigabyte, I think it probably comes to $20 million per gigabyte, okay? Flash did not exist then. Many of us used to meet at that time in Vail at the non-volatile memory workshop, trying to figure out and develop the non-volatile technologies.

Sanjay, Eli, Dan, many of the people at the forefront were there. And when non-volatile EEPROMs and flash got developed, we used to wonder what the uses would be. We couldn't find any mass-volume kind of use for those technologies. And here we are. So if you buy a SanDisk 64-gig flash drive today, it's probably about $1 per gigabyte, something like that. So I think I paid a little bit too much for this, but this was really what I wanted at that time.
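A quick back-of-the-envelope check of those figures, taking the module as 4 kilobytes and the prices as quoted in the talk:

```python
# Rough check of the cost-per-gigabyte comparison from the talk.
# Assumes "4K" means 4 kilobytes and uses the prices as quoted.
module_bytes = 4 * 1024           # 4K memory module for the HP-71B
module_price = 75.0               # dollars, as quoted in the talk

gigabyte = 1024 ** 3
price_per_gb_1985 = module_price * gigabyte / module_bytes
print(f"Mid-'80s: ${price_per_gb_1985:,.0f} per gigabyte")   # ~$19.7 million

# A 64-gig flash drive at roughly a dollar per gigabyte today makes the
# decline a factor of roughly 20 million.
price_per_gb_2012 = 1.0
print(f"Decline: ~{price_per_gb_1985 / price_per_gb_2012:,.0f}x")
```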

So anyway, with that, let me start my presentation. What I want to focus on is how, 10 years from now, somebody else can say the same thing about today's technology and today's flash, et cetera.

So Sanjay already mentioned and talked about the 3-pronged approach that we have, the 3-pronged strategy: continue NAND scaling as long as possible, and work on the future technologies, which for us are 3D resistive RAM and BiCS 3D NAND. These will allow us to assure competitive advantage, to keep scaling the technology, to keep reducing the cost, to keep increasing the density, so that we can enable many more new applications compared to even what we have right now.

So let me tell you where we are right now. This is the technology roadmap that you probably have already seen. The 24-nanometer technology is in volume production, has been in volume production. 19-nanometer technology is the workhorse for this year, 2012, and it's doing very well in the fab, ramping up. We have been working on 1Y technology, which will be for next year. And our main goal is to have technologies which, when in production, give us the smallest die size, highest density, and best reliability, and on time.

So 19-nanometer technology is in production. As an example, the highest density part that we have there is a 128-gigabit chip, which is an X3, 3-bits-per-cell product. It is the highest density product in the world and has the smallest die size in the world. That's a very good achievement.

And earlier, you heard about vertical integration. Vertical integration allows these kinds of products, both X2 and X3, to be used in a variety of applications with very high reliability and performance. In fact, I'm very happy to say this product has been accepted for presentation and publication at ISSCC, the International Solid-State Circuits Conference, which is the premier design and technology conference, and it will be presented there the week after next. So for more details, you can tune into that.

Now how do we keep the scaling going? Our view is that NAND scaling will keep continuing. However, there are many challenges that we need to overcome, and we've been doing very well at overcoming those challenges. In this slide here, I describe a couple of those.

Of course, the fundamental cell parameters have to be optimized, but these are the main ones that will determine how far NAND can scale.

So the first one, of course, is the lithography. That is very critical. The top right chart shows the cell X and Y dimensions. Obviously, those determine the final die size of the product. Not just that, how you choose to scale the X and Y dimensions also determines the reliability. If you keep scaling very fast, before its time, you might not have a reliable product, so you have to very carefully optimize what the X and Y dimensions are.

The best of the current lithography tools that we have in the fab, and those are available to anybody, are immersion lithography. And there's a limit to the X and Y dimensions to which you can scale using the existing lithography. That's shown in the green quadrant there.

On the red side of the chart, the red quadrant, is the future lithography. That's where you have EUV, you have different kinds of patterning, et cetera, but that gets very expensive, and those technologies are not ready right now for production. So we have to scale the technologies intelligently. The cell size has to be scaled with care, so that we can have the smallest-die-size product with the highest reliability, one which is manufacturable. Publishing papers, et cetera, can keep going in the red quadrant, but when you talk about actual production, that is what we need to focus on.
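For a sense of why the green quadrant runs out of room, here is a generic sketch using the standard Rayleigh resolution criterion for ArF immersion scanners; the k1 value is a typical assumption, not a SanDisk figure:

```python
# Generic illustration of the single-exposure resolution limit of ArF immersion
# lithography, via the Rayleigh criterion: half-pitch = k1 * lambda / NA.
# The k1 value is a typical practical assumption, not a SanDisk number.
wavelength_nm = 193.0   # ArF excimer laser wavelength
na = 1.35               # numerical aperture of water-immersion scanners
k1 = 0.28               # practical lower bound for k1 in high-volume production

half_pitch_nm = k1 * wavelength_nm / na
print(f"Single-exposure half-pitch limit: ~{half_pitch_nm:.0f} nm")   # ~40 nm

# Getting to ~19 nm features therefore needs double patterning or, eventually,
# EUV -- the expensive "red quadrant" options in the slide.
```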

The second consideration that every flash vendor has to go through is the physical limit. In the middle picture there, I'm showing the conventional cell that is the workhorse of the industry, very much so for all the manufacturers. But the tricks that we use with process innovation, et cetera, are going to determine how much you can scale and at what point you need to change the structure. What I'm showing there is that there is a cell-to-cell interaction that goes on, and as the cell keeps scaling, at some point you're not able to deposit the layer which isolates the two cells. At that point, the cell becomes unreliable. There's too much interaction that goes on. And so we have to go through process innovations, which we are going through, to extend the proven workhorse cell as long as possible.

The third limit is the electrical limit. When you keep scaling the cell, the number of electrons which store your information in the cell keeps reducing. So the plot on the bottom right, in red, shows how the number of electrons is reducing as we go through different technology generations, right? And of course, one of my, and our, job functions is to keep those electrons from getting lost.

So as you see, they keep going down, and that is not good. So we have to, again, come up with process innovations where you change the structure of the process in a way that keeps the number of electrons as large as possible. And there you see, in the green chart, we've been able to do that. And that allows us to keep scaling.
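The electron-count problem can be sketched with a simple charge-storage estimate, N ≈ C·ΔVt / q, where the capacitance shrinks roughly with cell area. The capacitance and voltage numbers below are illustrative assumptions, not figures from the presentation:

```python
# Back-of-the-envelope estimate of how many electrons define a NAND cell's
# threshold-voltage window as the cell shrinks: N ~= C_fg * dVt / q, with the
# floating-gate capacitance scaling roughly as cell area (feature size squared).
# All parameter values here are illustrative assumptions, not SanDisk data.
Q_E = 1.602e-19        # electron charge, coulombs
DVT = 5.0              # usable threshold-voltage window, volts (assumed)
C_PER_AREA = 3.0e-7    # effective gate capacitance per area, F/cm^2 (assumed)

for feature_nm in (43, 32, 24, 19):
    area_cm2 = (feature_nm * 1e-7) ** 2      # cell area scales roughly as F^2
    c_fg = C_PER_AREA * area_cm2
    electrons = c_fg * DVT / Q_E
    print(f"{feature_nm} nm: on the order of {electrons:,.0f} electrons")
```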

So the bottom line is that there will be process innovations required. In each NAND technology generation, there could be significant changes. But the more we can extend the infrastructure that we have in place for this conventional NAND, the better cost structure we'll have. So solving these problems through innovation keeps our costs low, which is one of the main goals. Of course, we'll change the cell structure when needed.

So we see that NAND scaling is going to keep going for a few more generations. And the innovations in process and manufacturing technologies, and the kind of vertical integration that you heard about earlier from Sanjay and others, in memory design, test, and system-level solutions, will allow us to extend this NAND roadmap.

And with that, we'll keep continuing to deliver the smallest die, highest density, low cost, good reliability, et cetera.

So when we take all that into account, this is what we are projecting our roadmap will look like. And you are looking at, in 2014, 1Z technology, 1Z NAND, and maybe something beyond that.

And of course, in the meantime, we are making progress, good progress, in our future technologies, very aggressive post-NAND development work. So 1Y will be the technology in production for 2013. 1Z will come after that, and who knows how far we can keep going with that, because nobody really knows what the limits of NAND are. If you recall, and I'm sure all of you know, when we were at 4x technologies, everyone was wondering whether that was the last node; then we went to 32, then 24. Here we are at 19. 19 nanometers is 190 angstroms. Gate oxides used to be 300 angstroms 15, 20 years back. Here we are in the horizontal direction with that kind of CD [critical dimension].

So nobody really knows how long NAND can keep scaling. We have to keep trying and we have to be innovative. But we are aggressively working on future technologies beyond NAND, and I'd like to give a brief update on our 3D resistive RAM. Once you go beyond electronic storage, you get into the realm where you have to rely on a material change.

So 3D resistive RAM depends on a resistance change in the material rather than on stored electrons. And this approach, we believe, is the best approach for the long term. This technology, once we put it in production, will keep going for a long time. However, the current promising approaches that we have for this technology require EUV, extreme ultraviolet lithography, which, as you probably know, is not ready and is still in development.

But there are many other components to this technology that we can still work on and perfect, so that when the lithography is available, we can put this in production. And we have made good progress there. As an example, on the right chart there, you see the bit cycling yield. Cycling is when you take the material through low-resistance and high-resistance states repeatedly and track the yield as a function of the number of cycles. It looks very good. So we are very pleased with that. This can provide us with production opportunities beyond 2015, and we are very excited about that.

The second technology which we're working on is still a form of NAND, but it's a 3-dimensional NAND: the NAND string here is vertical, which means that you can have a number of layers, one on top of the other. You can come up with products with extremely high densities, which are not possible with the 2D NAND that we currently have.

Moreover, it utilizes the existing infrastructure; it does not need EUV. So you are able to take this technology, utilize the existing infrastructure, and take it to production. Again, we are making progress here. We have had some good key developments on the process front. We have a 24-layer development test vehicle. By the way, the 3D resistive RAM that I showed earlier was also tested using a test vehicle to look at all the process and device technology developments.

So here, we have got a 24-layer structure. In the middle picture, you see the fully processed wafer. On the right-hand side, you see a picture which was taken in-line. Again, please note that these process modules and this technology are still in development.

At the bottom, you'll see something very interesting, which is storing 2 bits per cell. You see 4 distinct states, which is what is required for this 3D NAND technology to be cost effective. And we are very pleased to see that we're able to do that. So this could be a bridge to the 3D resistive RAM technology that I showed earlier. And if we're able to complete this development and the timing is right, then it can go into production using the existing infrastructure.
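To see why stacking layers and storing 2 bits per cell matter for cost, here is a purely illustrative bits-per-area comparison; the cell-footprint factors are rough textbook-style assumptions, not SanDisk's actual layout numbers:

```python
# Illustrative comparison of planar versus vertically stacked (BiCS-style) NAND
# density: bits per unit area ~= layers * bits_per_cell / cell_footprint.
# The footprint factors (in units of F^2) and the relaxed feature size are
# rough assumptions, not SanDisk layout data; the point is only the leverage
# that stacking layers provides.
def bits_per_um2(feature_nm, layers, bits_per_cell, footprint_f2):
    f_um = feature_nm / 1000.0
    cell_area_um2 = footprint_f2 * f_um ** 2
    return layers * bits_per_cell / cell_area_um2

planar = bits_per_um2(feature_nm=19, layers=1, bits_per_cell=3, footprint_f2=4)
stacked = bits_per_um2(feature_nm=45, layers=24, bits_per_cell=2, footprint_f2=6)
print(f"Planar 19 nm X3 cell:      ~{planar:,.0f} bits per square micron")
print(f"24-layer vertical, 2 bits: ~{stacked:,.0f} bits per square micron")
# Even at a much more relaxed feature size, stacking 24 layers already beats
# the planar density in this sketch, and adding layers improves it further.
```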

Now let me change the topic a little bit. Earlier, I talked about different technologies. The question, of course, arises: are these new technologies going to be able to replace NAND in the applications that we have for it? So here, I'm showing a spider chart, which shows 2 kinds of things. It's kind of busy, but I think you can see the black boxes with the red boundary: the attributes of the technologies, such as low cost per bit. One of the reasons why NAND has been so successful is because of scaling the technology and the cost reductions.

SanDisk alone in the last 20 years has reduced the cost by a factor of 50,000. 50,000. That's quite a lot. Other technologies right now are not getting scaled the way NAND has been scaling. So low cost per bit is important. Then you have endurance, you have the speed, and then you have the data retention.
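Spread over 20 years, that 50,000x works out to a steep annual rate; quick arithmetic on the figure quoted above:

```python
# What a 50,000x cost reduction over 20 years implies per year.
total_factor = 50_000
years = 20
annual_factor = total_factor ** (1 / years)
print(f"~{annual_factor:.2f}x cheaper each year, "
      f"i.e. roughly {100 * (1 - 1 / annual_factor):.0f}% lower cost per bit per year")
```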

Now different applications may require different combinations of these things. For example, if you look at the 1 o'clock position there, there's an application for set-top boxes. I'm sure many of you have them. Set-top boxes keep storing data constantly, but you don't read the data that often. So the endurance requirement there has to be very high, but the data retention doesn't need to be that high.

If you look at the 10 o'clock position, there's an application for navigation. I'm sure many or most of you have GPS systems. GPS requires reading all the time. There's no writing, so why burden that application with a high-endurance requirement? One of the beauties of NAND is that you can trade off performance, data retention, and endurance and make it applicable to a given application.

That's what is so powerful about NAND. So the spider chart shows you qualitatively, and actually, we have done quantitative calculations too, how a given technology does against those different properties.
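The trade-off Ritu describes can be restated in rough form like this; the profiles below only encode the two examples from the talk and are illustrative, not SanDisk's quantitative spider-chart data:

```python
# Qualitative restatement of the spider-chart point: different applications
# weight the same memory attributes differently, and NAND can be tuned to match.
# These profiles simply encode the two examples given in the talk.
application_needs = {
    "set-top box": {               # records constantly, reads back rarely
        "endurance": "high",
        "data retention": "modest",
    },
    "GPS navigation": {            # map data written once, read all the time
        "endurance": "low",
        "read performance": "high",
    },
}

for app, needs in application_needs.items():
    summary = ", ".join(f"{attr}: {level}" for attr, level in needs.items())
    print(f"{app} -> {summary}")
```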

So if you look at the next one, which is BiCS, which I showed and talked about earlier, it comes very close. In fact, in cost per bit, it's even better. Obviously, once we start designing the systems, the circuit design architecture, you can optimize some of these things, and maybe we can actually improve upon this. This one in yellow is the 3D resistive RAM. This is the reason why we think this is the technology of the future, the one which can replace NAND. Most of the properties that you see here are actually better, or can be better, in 3D resistive RAM.

So this, in short, tells us that we have a very, very strong strategy, a 3-pronged strategy, which allows us to continue the scaling that we have going on right now on NAND, to push NAND as hard as we can. Obviously, we'll have challenges, but we need to solve them and use the existing infrastructure. We think that NAND will be the dominant technology for the rest of the decade.

We also think that the technologies are very likely to coexist. I don't envision that one day suddenly somebody has a very good technology and within 6 months or a year can replace something as strong and widespread and useful as NAND, with the infrastructure that we have in the fabs, et cetera.

And 3D resistive RAM will be the successor into the next decade. So I think we are positioned extremely well in terms of where we are, where we have to go in the short term, and where we will be in the long term. And hopefully, just as this 4K became 64 gig, someday it will be multiple terabits, and we'll all have to figure out what we're going to use it for, like we were wondering 20 years back. Thank you.