
Installing a DAW to an SSD or an HD



Let me explain my situation and my acronyms.

I'm preparing to install a digital audio workstation program (in this case Sonar) to my computer but must choose whether to install it on the solid state drive or the hard drive.

I've been reading up on this, and the advice I keep seeing is that the best possible use of an SSD for making orchestral music is to place the orchestral sample library on the SSD and put NOTHING ELSE there. The reasoning is that you don't want your SSD being accessed for any other tasks while composing with samples, because that takes away some of its "power." That makes sense to me, but what about the DAW itself? It's the program that has to USE the orchestral samples in the first place, so it seems like it might ALSO need to be on the SSD. It's all quite confusing to me. Unfortunately, Windows 7 came preinstalled on the SSD. Does anyone have experience with problems using DAWs, samples, and solid state drives?

Thank you very much for your time.


SSDs use flash-based memory to store information. The problem with this is that each flash memory cell has a finite number of write cycles before it becomes unusable.

What that means is you should avoid installing anything that writes data to the SSD.

Your applications and executables should be installed on the same drive as your operating system, and because the OS updates itself frequently, I would not recommend putting it on an SSD (even though that's the trend right now). SSDs will fail; it's not a question of if, only when. The fewer writes an SSD absorbs, the longer it is likely to last.

They are recommended for sample libraries because sample libraries are RARELY updated and you access a sample drive almost exclusively to READ, which SSDs are exceptionally fast at, so they're great for that.
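To put rough numbers on the wear argument, here's a quick back-of-the-envelope sketch in Python. The P/E (program/erase) cycle rating and the write-amplification factor are illustrative assumptions, not the specs of any particular drive:

```python
# Back-of-the-envelope SSD endurance estimate. The P/E cycle count and
# write amplification are illustrative assumptions, not real specs.

def estimated_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                             write_amplification=1.5):
    """Years until rated write endurance is exhausted, assuming perfect
    wear leveling spreads writes evenly across all cells."""
    total_writable_gb = capacity_gb * pe_cycles
    effective_daily_gb = daily_writes_gb * write_amplification
    return total_writable_gb / effective_daily_gb / 365

# e.g. a 100 GB drive rated for 3,000 P/E cycles, writing 20 GB per day:
print(f"{estimated_lifetime_years(100, 3000, 20):.1f} years")  # ~27.4 years
```

Either way the point stands: reads cost the drive essentially nothing, while writes are what spend its lifetime.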

SONAR should be installed on your OS drive. You can then designate an HDD separate from your OS drive as a project drive; this is the recommended setup.


Unless you're reading samples directly from disk (which some DAWs can do) instead of loading them into RAM, an SSD isn't going to matter for samples.

I have seen firsthand the speed increase you get from installing your OS on an SSD. I just install all my other programs (like my DAW) on regular drives. The speed increase on your OS drive will help EVERYTHING, because all those background processes that slow down your OS (due to I/O) will not take as long to complete, and will therefore use fewer resources and spend less time waiting on the disk.

If you really want an SSD sample drive for orchestral stuff, I'd suggest getting another SSD and keeping your OS on the one it came with. I'll repeat: the biggest speed increase you can possibly get (as far as general computer speed goes) is putting your OS on an SSD (assuming your CPU/RAM/etc. are up to par).


I've made my warning; the choice is yours.

Unless your orchestral samples are crap, you won't be able to load them all into RAM.

My base orchestral template is 10 GB DFD (Direct from Disk); if I were to load all of that into RAM, I would have to load well over 60 GB, maybe even as much as 80 GB with a full session.

DFD is the only way to manage that.
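For anyone wondering what DFD means in practice, here's a toy Python sketch of the idea. The buffer sizes are made up for illustration; real samplers like Kontakt implement this natively and far more cleverly:

```python
# Toy sketch of the DFD idea: keep a small attack buffer in RAM so a
# note can start instantly, and stream the rest of the sample from disk
# as playback catches up. Buffer sizes here are illustrative only.

ATTACK_BUFFER_BYTES = 64 * 1024   # resident in RAM for instant note-on
CHUNK_BYTES = 256 * 1024          # streamed from disk during playback

def load_all(path):
    """The RAM-hungry alternative: the entire sample lives in memory."""
    with open(path, "rb") as f:
        return f.read()

def stream_dfd(path):
    """DFD-style: yield the attack buffer, then stream the remainder."""
    with open(path, "rb") as f:
        yield f.read(ATTACK_BUFFER_BYTES)
        while chunk := f.read(CHUNK_BYTES):
            yield chunk
```

Streamed this way, a 60 GB session costs you fast disk reads spread across playback instead of 60 GB of RAM up front, which is why read speed matters so much for big libraries.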


First off, I just want to note that I'm not necessarily disagreeing with what you've said (except for the part about not using an SSD for the OS).

While it is true that SSDs WILL fail at some point, it's not a matter of a few months, or a year, or anything like that; most likely it will take a good few years. I've had an SSD as my OS drive and it's still perfectly fine after roughly two years. A good idea is to keep backups, and perhaps don't leave your computer on all the time: the OS will be doing stuff constantly, and if you aren't at the machine, you don't need it running.

I'd also like to note that regardless of the amount of RAM you have, programs are only allotted a certain amount of it (2 GB, I think; it might even be 4 GB).

Anyway, my suggestion is to keep the OS on an SSD and get another SSD for this crazy orchestral library.


Lots of misinformation in this thread.

Unless you're reading samples directly from disk (which some DAWs can do) instead of loading them into RAM, an SSD isn't going to matter for samples.

Any quality sampler will have the option to read sample libraries DFD, because it would use too much RAM otherwise. Also, that's not a function of the DAW but of the plugin that's loading the samples.

I have seen firsthand the speed increase you get from installing your OS on an SSD. I just install all my other programs (like my DAW) on regular drives. The speed increase on your OS drive will help EVERYTHING, because all those background processes that slow down your OS (due to I/O) will not take as long to complete, and will therefore use fewer resources and spend less time waiting on the disk.

Do you actually know what percentage of background OS CPU usage is dedicated to I/O? I don't (so I'm not making any vague claims), but to really highlight the point here: it is not true that EVERYTHING will be helped. Only processes and tasks that read from or write to the SSD will be improved. I/O to other, traditional drives will of course not be improved, and anything that doesn't do I/O will not be improved. The only other way processes will be improved is the initial load time when a process is first read from disk, which matters little beyond the initial bootup and the things that run at login.

I'd also like to note that regardless of the amount of RAM you have, programs are only allotted a certain amount of it (2 GB, I think; it might even be 4 GB).

64-bit programs can use far more than that, and 64-bit plugins in a 64-bit host (or 64-bit plugins run in a separate process via jBridge or something similar) are limited only by the available RAM in your machine. 32-bit programs are limited to 4 GB, yes; so with a 32-bit DAW, unless you have more than 4 GB and use jBridge to load your sampler in a different process, your samples, synths, effects, recorded audio, and everything else your DAW needs are limited to a collective 4 GB of memory. All the more reason to favour DFD even for modest sample libraries, not just "crazy orchestral" ones.
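If you're unsure which limit applies, the pointer size of the process tells you. A quick check in Python (it reports the bitness of the Python process itself, as a stand-in for whatever process you care about):

```python
# Report whether the current process is 32- or 64-bit, which determines
# the address-space ceiling discussed above.
import struct
import sys

pointer_bits = struct.calcsize("P") * 8   # pointer size in bits
print(f"{pointer_bits}-bit process")
print(f"sys.maxsize = {sys.maxsize:,}")   # ~2**31 on 32-bit, ~2**63 on 64-bit
```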


Any quality sampler will have the option to read sample libraries DFD, because it would use too much RAM otherwise. Also, that's not a function of the DAW but of the plugin that's loading the samples.

I meant regular sample usage there rather than crazy large sample libraries. Valid point for sure.

P.S. Some DAWs have built-in plugins that load samples, etc. So yes, it can also be a function of the DAW itself to support that for its own internal sampling junk.

Do you actually know what percentage of background OS CPU usage is dedicated to I/O? etc.

The OS is always doing background stuff, sometimes a little, sometimes a lot. I/O is THE bottleneck and has been for years. I have firsthand experience going from a fast traditional HDD to an SSD, and the difference is very noticeable. Aside from quicker boots of the OS and programs, the computer will be snappier and more responsive in general because I/O isn't so slow. Page swaps, background programs writing to disk, etc.: I/O is pretty much always going on, or at the very least happens very often.

You are right to say that doing stuff on HDDs will not be faster. Sure, but that's a given.

Things that don't do I/O actually will be improved. Like I said, the computer's snappiness and overall speed will improve. Let's take a theoretical situation where a CPU has 4 threads and all of them are waiting on some I/O. A program that doesn't do I/O can't continue, because the CPU is too busy waiting on those other I/O operations.

64-bit programs can use far more than that, and 64-bit plugins in a 64-bit host (or 64-bit plugins run in a separate process via jBridge or something similar) are limited only by the available RAM in your machine. 32-bit programs are limited to 4 GB, yes; so with a 32-bit DAW, unless you have more than 4 GB and use jBridge to load your sampler in a different process, your samples, synths, effects, recorded audio, and everything else your DAW needs are limited to a collective 4 GB of memory. All the more reason to favour DFD even for modest sample libraries, not just "crazy orchestral" ones.

As noted here, http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx, virtual address space for 32-bit applications is 2 GB, up to 4 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE. 64-bit applications get 2 GB, up to 8 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE set. Pages can go up to some stupid amount (up to 128 GB, it looks like). But the page file lives on the hard drive and is swapped into memory when the application needs it, so if the application is going to be doing a lot of paging, hard drive I/O will once again be the bottleneck. So to reiterate: every 64-bit application can have up to 8 GB of memory in use at a time, with the rest paged out (32-bit programs being 4 GB, like you mentioned). I already noted that some programs open a new process for each plugin so each one can have its own share of the address space (or jBridge, like you mentioned).

I believe the point of the virtual address space limitation is basically to stop any one program from hogging all the RAM on the system. I can't back that statement up, but that's my theory.

Dunno if I answered everything; let me know if there is still something you believe to be misinformed.


That's just virtual addressing. If you have enough memory, a process will not need to touch the page file to run, and in those cases you can have a 64-bit process running entirely in physical RAM, up to crazy amounts like 128 GB. I have 32 GB of RAM, and I use plugins that load 10-14 GB each, in a single process with no paging.


To clarify a couple of points on memory:

Information stored in RAM is placed in little memory cells--each of these cells has an address, like the address of your house on your street.

The problem older computers face is that there is a limit to how many distinct addresses they can assign to your information at once.

This limitation is a mathematical one based on binary math. Remember, computers store information as combinations of 1s and 0s. A 32-bit operating system is limited to strings of 32 1s and 0s, which means the maximum number of distinct combinations, expressed in base-10 math, is 2^n, where n is the number of bits.

So for a 32-bit operating system, the maximum number of memory cells that can have addresses is 2^32, or 4,294,967,296. Keeping in mind that each memory cell holds a byte and that there are 1024 bytes in a kilobyte, 1024 kilobytes in a megabyte, and 1024 megabytes in a gigabyte, that works out to 4 GB.

Conversely, a 64-bit operating system is limited to 2^64 possible addresses, a mathematical ceiling no consumer hardware comes close to: about 1.8x10^19 bytes, more conveniently expressed as 16 exabytes (EB; an exabyte is approximately a billion gigabytes).
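A couple of lines of Python confirm the arithmetic:

```python
# Number of distinct addresses for a given pointer width, assuming one
# address per byte-sized memory cell.
def addressable_bytes(bits):
    return 2 ** bits

print(f"{addressable_bytes(32):,} bytes = {addressable_bytes(32) / 1024**3:.0f} GB")
# 4,294,967,296 bytes = 4 GB
print(f"{addressable_bytes(64):,} bytes = {addressable_bytes(64) / 1024**6:.0f} EB")
# 18,446,744,073,709,551,616 bytes = 16 EB
```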

A computer dedicated to sample-based production should turn off page filing/virtual memory. Virtual memory uses a drive as temporary storage for data that hasn't been used in RAM for a while. Page filing is certain to accelerate a solid state drive's wear, because it writes to the drive constantly.
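Before disabling the page file outright, it's worth checking how heavily it's actually used. Here's a sketch using psutil, a third-party Python package (pip install psutil), not something built into Windows:

```python
# Check current page file / swap usage before deciding to disable it.
# Requires the third-party psutil package (pip install psutil).
import psutil

swap = psutil.swap_memory()
print(f"page file total: {swap.total / 1024**3:.1f} GB")
print(f"page file used:  {swap.used / 1024**3:.1f} GB ({swap.percent}%)")
```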


Hmm, OK. From what I'm reading, if you turn off all paging, then yes, an application can use the whole 64-bit address space range. Turning off the page file was never mentioned before your post and dannthr's post before it. From the way it played out, it looked like you guys were saying that being 64-bit alone meant a program could use all the memory, which is not the case, as the page file would need to be turned off.

With a page file, everything I said is true. Without one, you run the risk of running out of RAM and hard-crashing your system. With enough RAM, that would not be a problem. So either have a page file, or make damn sure you have enough RAM that it will never all be in use at one time.

Edit: all that being said, from what I understand the OP's post makes it seem like this will be an all-purpose computer, not strictly a production machine. I wouldn't really recommend turning off the page file on a machine like that. Again, unless you have a godly amount of RAM, I guess.


I'm going by what I remember from OS and concurrency courses in school and from what I've learned developing multithreaded programs over the past several years. If I'm wrong on any point, I'd love to see something from a reasonable source explaining why.

I meant regular sample usage there rather than crazy large sample libraries. Valid point for sure.

P.S. Some DAWs have built-in plugins that load samples, etc. So yes, it can also be a function of the DAW itself to support that for its own internal sampling junk.

Most DAWs that I know of still implement this as a plugin. It's a DLL that the DAW executable loads (meaning it can theoretically be a separate process), not part of the DAW executable itself. And there's no difference between "regular sample usage" and "crazy sample libraries" as far as DFD goes; the difference is in the sampler used. Kontakt can use DFD for everything, from the piano patch that loads 500 MB of data into RAM to the synth patch that loads 3 MB.

The OS is always doing background stuff, sometimes a little, sometimes a lot. I/O is THE bottleneck and has been for years. I have firsthand experience going from a fast traditional HDD to an SSD, and the difference is very noticeable. Aside from quicker boots of the OS and programs, the computer will be snappier and more responsive in general because I/O isn't so slow. Page swaps, background programs writing to disk, etc.: I/O is pretty much always going on, or at the very least happens very often.

FYI, I just got a new computer last Sunday with the OS on an SSD. It's a little hard to compare to my old machine, which is a) a laptop versus the new desktop, b) dual-core while the new one is quad-core, and c) slower-clocked per core. I'm not convinced things run noticeably faster, though, besides booting up, launching programs, and programs reading their data from disk.

Outside of paging, what other I/O tasks hit the disk often enough to have a noticeable impact on overall performance? As far as I know, running programs are loaded into memory, not somehow streamed on demand (and besides, the main source of memory usage for nearly every program is the memory it allocates on demand, not the memory required to load the executable code and embedded static data into RAM). A running program will not touch the hard drive after it has been loaded unless it contains code that explicitly performs disk I/O or unless, due to memory pressure on the system, it gets paged out to disk.
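Rather than speculate, this is measurable. A sketch using psutil again (third-party package): sample the system-wide disk counters a few seconds apart while the machine sits idle, then diff them:

```python
# Measure background disk I/O: sample system-wide counters, wait while
# the machine is idle, then diff. Requires psutil (pip install psutil).
import time
import psutil

before = psutil.disk_io_counters()
time.sleep(10)                        # leave the machine alone meanwhile
after = psutil.disk_io_counters()

print(f"reads:         {after.read_count - before.read_count}")
print(f"writes:        {after.write_count - before.write_count}")
print(f"bytes written: {after.write_bytes - before.write_bytes:,}")
```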

Things that don't do I/O actually will be improved. Like I said, the computer's snappiness and overall speed will improve. Let's take a theoretical situation where a CPU has 4 threads and all of them are waiting on some I/O. A program that doesn't do I/O can't continue, because the CPU is too busy waiting on those other I/O operations.

I assume you actually mean a situation where three threads are waiting on I/O and a fourth isn't doing any I/O at all (all four threads waiting on I/O says nothing about how threads not using I/O would be improved). Regardless, this isn't true, because of how operating systems schedule threads. If you've got four threads all running at the same priority, they should get equal CPU time (unless, of course, threads are blocked or idle). If a thread chooses to use its CPU time to wait on I/O, good for it; it's not going to be allowed to use more than its fair share of the CPU, regardless of what task it's performing. So the non-I/O thread will NOT be any slower, and the only way for it to execute faster is for there to be fewer (or no) other threads running on the CPU.

Of course, OS threads may run at a higher priority than user threads. Still, the OS scheduler notices when an I/O thread is blocked (if it isn't blocked, it's doing actual work and can proceed) and gives CPU time to another thread while the I/O request is processed. It would be a really stupid OS that forced all tasks to wait whenever one thread blocked for any reason.
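Here's a toy demonstration of that scheduling behaviour in Python, with time.sleep standing in for a blocking disk read:

```python
# A thread blocked on (simulated) I/O does not stall a compute-bound
# thread: the OS schedules the runnable thread while the other waits.
import threading
import time

def io_bound():
    time.sleep(2)                     # stands in for a blocking disk read

def cpu_bound():
    start = time.perf_counter()
    sum(range(10_000_000))            # pure computation, no I/O
    print(f"compute finished in {time.perf_counter() - start:.2f}s")

t_io = threading.Thread(target=io_bound)
t_cpu = threading.Thread(target=cpu_bound)
t_io.start(); t_cpu.start()
t_io.join(); t_cpu.join()
# The compute thread finishes long before the 2-second "I/O" completes.
```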


2 weeks later...

Thanks a lot, everyone, for all of the useful information. I've already got Windows 7 installed on the SSD, so I decided to install Sonar X2 on the hard disk. One article I read said that it's not really writing alone that destroys an SSD, and that if you remove all other factors from the equation, an SSD can write 100 gigs of data per day and last roughly 10 years. Read that here:

http://www.makeuseof.com/tag/data-recovered-failed-ssd/

I learned a lot from you all. Thanks. I think the best thing for me to do is get an external USB hard drive, keep everything backed up for when the time comes, and not freak out about it.


One article I read said that it's not really writing alone that destroys an SSD, and that if you remove all other factors from the equation, an SSD can write 100 gigs of data per day and last roughly 10 years.

That assertion makes no reference to context. How many gigs, across what size of SSD?

If you have a 100 GB SSD, write the whole thing once a day, and it lasts 10 years, that's not that much. Write twice to one cell and that cell lasts 9 years while the rest last 10, but the SSD doesn't know the cell is bad. Write twice a day and it lasts 5 years? Write 10 times a day and it lasts 1 year?

No matter how you slice it, you are limited by the number of times you can write to an SSD. Be careful: it will fail, and every write brings it that much closer to failure.
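The scaling is linear, which is easy to sanity-check; assume, purely for illustration, a drive whose endurance is spent after 10 years of one full-drive write per day:

```python
# If one full-drive write per day exhausts endurance in 10 years, then
# N full-drive writes per day exhaust it in 10/N years (illustrative).
YEARS_AT_ONE_WRITE_PER_DAY = 10

for writes_per_day in (1, 2, 10):
    years = YEARS_AT_ONE_WRITE_PER_DAY / writes_per_day
    print(f"{writes_per_day} full write(s)/day -> {years:.1f} years")
```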


One article I read said that it's not really writing alone that destroys an SSD, and that if you remove all other factors from the equation, an SSD can write 100 gigs of data per day and last roughly 10 years. Read that here.

Yes, let's remove the other factors so we can make our drives last longer.

GENIUS. :tomatoface:

