19

We've been looking into using SSDs with Oracle to speed up our test migration runs. A run currently takes 12-18 hours to complete, depending on the volume of data (we're obviously doing lots of performance tweaking too). We have a number of cheap Linux boxes we're using for various runs and analysis.

The cost of SSDs direct from Dell is prohibitive. I was wondering if anyone has experience of using consumer SSDs (such as the Crucial/Micron ones).

I realise TRIM support would be an issue on Linux (we're using CentOS). Has anyone used them on Windows 7 to counter this?

1
  • 1
    We ended up adding SSDs for the indexes and tablespaces, striping both across the drives. We didn't get the big speed jump we were hoping for: more like 10-15% faster for our migration runs, but in the absence of any other options that was a good timesaver (our Oracle tuning expert had already been let loose on the DB). We went with Crucial SSDs, which offered pretty good performance for a good price, and we still haven't had any problems. We also accepted they'd wear out and are keeping an eye on them (and copious backups)! Thanks for all the comments. Stuart. Commented Nov 23, 2011 at 9:12

4 Answers

7

Here are the biggest issues I see with SSDs and databases:

  • SSD Failure
    • It happens more often than I would like: often within one to two years of normal use, and sooner if read from and written to heavily. What's happening when you send your redo logs and data files to an SSD? Lots of reads and lots of writes. Bad combination, IMO.
  • SSD "cure-all"
    • SSDs are nice when it comes to read speed, yes. They're great to boot from for an OS, or to start programs from. But one shouldn't allow SSDs to become a fix for full-on optimization. I'm sure you aren't, since you are likely trying everything you can to make the migration happen faster, but sometimes SSDs can seem like a holy grail to avoid some of the tougher issues when it comes to optimizing. (In a lot of ways the same can be said about throwing more hardware or memory at a problem. Sometimes it is better to optimize the problem away rather than throw more hardware at it.)
  • R/W mismatch
    • SSDs read far faster than they write, so a write-heavy database workload (redo, undo, sorts) won't see anywhere near the benefit a read-heavy one will.
  • Wear Leveling and Security
    • If security is a concern at all, the wear leveling in your SSD is going to make it nigh-impossible to wipe the drive and be certain it has been zeroed. Two, three, or more passes won't do it, and there will always be a chance that some portion of your data is still recoverable.
1
  • Do you still have the same opinion in 2019?
    – TrojanName
    Commented May 13, 2019 at 16:37
7

I don't see any answers to your question yet, and while I don't have any experience using consumer-grade SSDs with a database, I thought the following question on ServerFault might be useful:

https://serverfault.com/questions/69037/configuring-sql-for-optimal-performance-ssd-or-hdd

edit: I found the following article recently and thought I'd add it to my answer. It talks about using SSDs with SQL Server, but I thought some of the factors discussed might be useful for Oracle DBAs as well.

http://technet.microsoft.com/en-us/magazine/hh334997.aspx (Reduce I/O, Increase Performance)

5

SSDs can make READING data faster.

Writing won't be any faster. Don't even think about placing the redo logs on SSD, since they are only ever written to. To speed up writes to the redo logs, add more drives and stripe across them. Redo logs are written sequentially, so adding more spindles improves write throughput until you hit the controller's limit.
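To put rough numbers on the redo volume you need to stripe for, one option (a sketch, not from the original answer) is to sample Oracle's cumulative redo counters twice and diff them:

-- Cumulative counters since instance startup; sample twice,
-- a known interval apart, and subtract to get a redo rate.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'redo writes');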

What is that test migration doing? Does it use procedural code, or set-based operations?

If it uses procedural code, be sure to implement bulk operations. Set-based operations are almost always faster; see the sketch below.
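As a minimal sketch of the difference (the tables src and tgt are hypothetical, not from the question):

-- Slow: procedural, row-by-row processing.
BEGIN
  FOR r IN (SELECT id, payload FROM src) LOOP
    INSERT INTO tgt (id, payload) VALUES (r.id, r.payload);
  END LOOP;
  COMMIT;
END;
/

-- Almost always faster: one set-based statement doing the same work.
INSERT INTO tgt (id, payload)
SELECT id, payload FROM src;
COMMIT;

Where the logic genuinely has to stay procedural, BULK COLLECT and FORALL recover most of the benefit by batching the row-by-row trips.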

3
  • 1
    Do you have a source for a benchmark showing inferior write speed on SSDs, particularly with the same amount of striping? My understanding was that SSDs are faster on writes too, but the difference isn't as dramatic as it is for reads. Commented Jul 5, 2011 at 14:53
  • @Leigh - That's true, but the real point is that the advantage is considerably greater for random I/O than for sequential. I think it is fair to say that SSDs are still only for high random-IOPS needs. Commented Jul 6, 2011 at 7:08
  • 1
    We did some testing with the F5100 cards on an M5000 system, where we tried to use the flash disks as a secondary cache for ZFS, dedicated to files, and as extended SGA. Reading was fast; writing was slow compared with what we did with the SAN (some EMC box). As noted, logs are written sequentially, and spinning disks are made for that kind of I/O when striped.
    – user953
    Commented Jul 6, 2011 at 7:09
2

I have swapped my old HDD for a Crucial M4 512 GB SSD to perform tests on a big Oracle database.

I run Oracle 10.2 under Windows 7 in VMware.

The performance changes are really impressive. Database imports and exports, as well as SQL queries, are much faster.

However, I have a strange error appearing from time to time:

ERROR 2012-06-18 18:18:14,177 : Error performing query
java.sql.SQLException: ORA-01578: ORACLE data block corrupted (file # 6, block # 1646317)
ORA-01110: data file 6: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\DUNE\WEBDATA02.DBF'

I never had this problem with the same VM on the same machine with the HDD.

After running DBV on the file, nothing is marked as corrupted.
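For reference, a DBVERIFY run on that datafile would look something like this (the 8K block size is my assumption; substitute the tablespace's actual block size):

dbv FILE=C:\ORACLE\PRODUCT\10.2.0\ORADATA\DUNE\WEBDATA02.DBF BLOCKSIZE=8192

As a cross-check on 10.2, RMAN's BACKUP VALIDATE CHECK LOGICAL DATAFILE 6 populates v$database_block_corruption with any blocks it considers corrupt.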

I haven't found anything about this issue.

1
  • I don't recognise that error, but I forgot to mention that imports were sped up massively by the SSDs; it was just the migration runs that only jumped 10-15% in speed. So thanks for that. Commented Dec 2, 2014 at 10:41
