UBCD Drive clone and Resize Tests - Oct 2016

Weird Stuff

Core Maintenance Software

Customized VMs (Virtual Machines)

Manual - Customized Software Installs - May 2018

Manual - Customized Software Installs - Obsolete

Nexus Ports

The following ports are automatically enabled on all ECE Nexus and Linux computers. Note that UDP from the wireless address space (172.x.y.z) is unlikely to reach the 129.97.x.y address space because of the way the wireless access works.

Port              Protocol      IP Range                  Comment
10 000 to 10 010  TCP and UDP   129.97/16                 ECE 355
10 000 to 11 000  TCP and UDP   129.97/16, 172/8, 10/8    ECE 355 (on Linux only)
22 222            UDP           129.97/16                 SIP video
22 224            UDP           129.97/16                 SIP audio
22 232            UDP           129.97.8/24               SIP local video
22 234            UDP           129.97.8/24               SIP local audio
4000              UDP           129.97/16                 SIP setup??
5060              UDP           129.97/16                 SIP basic audio
5061              UDP           129.97/16                 SIP messages
Echo Request      UDP           all internet              Ping service
UserAppSVC.exe    Any           129.97.56/24              UserApp security tracking
NtTsyslog.exe     Any           129.97.56/24              Syslogging of security and Windows messages

Software List

The following is a list of software which I've installed.
If you have any questions contact me, Eric, by visiting me in E2-2357 or email at praetzel@uwaterloo


Special Software Installs

This software is manually installed or configured:

Software Hints

Linux Logins

Weekly counts of successful SSH password logins, collected with zgrep -c "sshd.*Accepted password" against the rotated messages logs.
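
A minimal sketch of how the weekly totals can be produced (the log path and rotation naming are assumptions; adjust to the host):

    # count successful SSH password logins in each rotated weekly messages file
    for f in /var/log/messages-*.gz; do
        printf '%s\t' "$f"
        zgrep -c "sshd.*Accepted password" "$f"
    done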

Data 2018 to current - weekly totals - divide by 504 for the interval average
2
148
1448
6477
853
181
80
1173
829
763
409
32
134
2163
10
6
16
14
13
8
28
17
17
19
7
12
17
25
13
6
6
0
2
0
12
1
14
6
11
39
16
60
177
148
445
388
316
1257
1144
122
113
11
72
99
1322
1814
2664
3381
8928
886
848
6397
16788
4684
8011
13085
802
190
150
60
854
2070
6158
5815
4433
8462
653
4276
14127
5288
14055
4462
4170
920
336
52
68
244
2763
9807
5418
7010
3523
1623
3021
10038
10769
10928
6414
7959
2943
284
308
104
170
1261
8348
11271
18387
9509
16226
1287
1657
12852
22395
5811
10804
12478
1820
14653
3084
549
489
2626
7334
8750
9949
9918
17414
12652

Linux Software Inventory by Course (July 2020)

Unix Installs (from Sanjay June 2007)

Log Analysis

Software_Name	Software_Version	ECE_Courses	General Description

ComSol MultiPhysics: NE241 Fall - 20 users at a time, ECE 375 Winter/Fall 13 users at a time, NE454B Fall ~50 students

Silvaco TCad  ECE 433, 730  W2016, W2017

C++11: gcc 4.7+ on Linux - eceUbuntu for ECE 453, March 2015. gcc 4.1.2 is on CentOS 5/6 - far too old

AMANDA	2.5.0		
BOUML	2.29	ECE251,ECE355	
Dave's ColdFire Emulator	0.3.2	ECE354	
Data Display Debugger (ddd)	3.3.11	ECE355	
Doxygen	1.4.4		
Electric + SFS	7.00	4th Year Projects	C version of Electric VLSI CAD Package
Electric / Java	8.03	4th Year Projects	Java version of Electric VLSI CAD Package
FireFox	1.5		FireFox Web Browser
GCC for SPARC Systems	4.01	ECE251,ECE355	GCC optimized for Sun SPARC processors with Sun Forte backend code generator
GCC for ColdFire 5307	3.4.6	ECE354	GCC cross compiler
Mtx	1.2.18		Magnetic Tape eXecutive utility used to control tape drive
MySQL			
Opera	8.50		Opera Web Browser
Scons	0.96.1		
SmartMon Tools	5.33		
Sonnet 10.52		ECE 471(?)	
STAR	1.4.3		Super TAR tape backup software
Xcircuit	3.4.26	ECE241	There are drawing programs, and there are schematic capture programs. All schematic capture programs will produce output for inclusion in publications. However, these programs have different goals, and it shows. Rarely is the output of a schematic capture program really suitable for publication; often it is not even readable, or cannot be scaled. Engineers who really want to have a useful schematic drawing of a circuit usually redraw the circuit in a general drawing program, which can be both tedious and prone to introducing new errors.
NG-SPICE		ECE241	SPICE 3F5 Simulation Engine
KJ Waves		ECE241	SPICE GUI / Front End
OpenOffice	2.2		Office Productivity Application Suite

Tweaks

Linux Installs

Software to Investigate

  1. ThinLinc - easy to set up and use; performance is worse than NoMachine on high-latency links. Oct 2016
  2. MOSH - looks great but no X11 support yet. It's supported in MobaXterm. Open firewall UDP 60000 to 60999 and ssh (see the sketch after this list). Oct 2016
  3. MobaXterm X11 client for Windows. It works with Altera Quartus, is as expensive as NoMachine when licensed, but has a free "personal" version.
  4. Mono - Ximian's open-source C# implementation
  5. IBM "Rational" software. MS VC 6.0 is needed, but it does not have to be preinstalled, I believe. PATH has to be set so that VC is accessible from Rose Real Time.
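
  A minimal sketch of the MOSH firewall opening for item 2, assuming a firewalld host; iptables-based hosts would need the equivalent ACCEPT rules:

    # open ssh and the MOSH UDP range 60000-60999
    firewall-cmd --permanent --add-service=ssh
    firewall-cmd --permanent --add-port=60000-60999/udp
    firewall-cmd --reload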

Nexus Servers

Linux Problems

Nexus (Windows 2000, XP, Win7, Win10) Problems

Nexus (Windows 2000, XP) Q: disk

Nexus (Windows 2000, XP)

Policies

  • Win 10 background image at login is set with policy for all labs, using the old lab pictures but without the text help info that we used to have on the login screen - March 2019
  • Misc

  • Login message at logon - set a User Config, Windows Settings, Script (Logon) along the lines of:
    msg.exe %USERNAME% "%USERNAME% - a gentle reminder, 'food (eating) or refreshments (drinking) are not allowed in the Engineering Labs'."
    goto end
    
    :end
    

    Memory Use

  • UserAppSVC - 1.4M memory used
  • Firefox 1.5.0.7 70M memory used
  • Thunderbird 1.5.0.7 46M memory used
  • Norton NAV,SAV 17M memory used
  • Putty 500k + 3M per window memory used
  • Windows Media Player 9.0 6.7M memory used
  • StarOffice (OpenOffice) quickstart 8.3M memory used, 22.3M with blank text document open
  • Nexus Tuque Installs

    Note - to find the uninstaller look under the registry in: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall

    waitfor32.exe is on the Tuque share for adding a time-out to installs;
    e.g. waitfor32.exe -f200 sets a 200-minute timeout

    Nexus GPO MSI Software Installs - Testing

    Installed


    Polaris

    Note: in order to use much of this installed software (e.g. Xilinx, CVS, Visual Prolog, PSpice 8, MaxPlus) you will have to run menu.bat and, probably, reboot. If it isn't in the path, run it as Q:\eng\ece\utl\menu.bat.

    Notes


    Student Photo System


    ECE 324 Computers


    ECE 222 Computers


    E2-3344 Windows 7 Setup

    May 2012

    E2-3344


    Standalone Win 98


    Special Computers


    SSD Speed - AHCI vs IDE - Windows 7 - July 2015

    The following are performance tests using Samsung Magician software and Samsung EVO 840 and 850 250G SSDs with an Asus P8H77-M motherboard, 8G of RAM

    Different BIOS settings were tested: IDE or AHCI configuration and a 3GB/s or 6GB/s SATA connection.

    In Win 7 it's critical that two services be set to auto-start at boot time or changing from IDE to AHCI will bluescreen the machine. Change the Start value from the default of 3 to 0 for these two keys:
    HKLM\System\CurrentControlSet\Services\msahci
    HKLM\System\CurrentControlSet\Services\iastorV
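
    A sketch of making that change from an elevated command prompt (same keys as above; msahci is the stock Microsoft AHCI driver, iastorV the Intel storage driver):

    reg add "HKLM\System\CurrentControlSet\Services\msahci" /v Start /t REG_DWORD /d 0 /f
    reg add "HKLM\System\CurrentControlSet\Services\iastorV" /v Start /t REG_DWORD /d 0 /f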

    Windows 7 performance tests, with a HD, were done with a 2012 vintage 500G WD hard drive. Linux tests, with a HD, were done using a 2008 vintage 160G WD drive.

    SSD Model, Computer              Speed                            IDE/AHCI   Magician Sequential (Read / Write)   Magician IOPS (Read / Write)   Win 7 HD Perf Test
    Samsung EVO 850 250G, testbed    3GB/sec                          IDE        212 / 236                            8551 / 13712                   -
                                     6GB/sec                          AHCI       438 / 486                            63898 / 62946                  -
    WD 500G 7200rpm, testbed         3GB/sec                          IDE        98 / 100                             208 / 410                      5.9
                                     6GB/sec                          AHCI       109 / 105                            448 / 397                      5.9
    EVO 840, public06                6GB/sec                          AHCI       550 / 241                            62207 / ** 4019 **             7.9
    EVO 850, public10                3GB/sec                          IDE        202 / 226                            8276 / 13605                   7.9
                                     6GB/sec                          AHCI       436 / 487                            62040 / 62074
    EVO 850, testbed                 3GB/sec (Win updates installing) IDE        208 / 222                            6398 / 13511
                                     3GB/sec                          IDE        211 / 227                            7903 / 13626                   7.3
                                     6GB/sec                          IDE        309 / 371                            8943 / 16635                   7.7
                                     3GB/sec                          AHCI       284 / 269                            50446 / 43355
                                     6GB/sec                          AHCI       550 / 469                            60348 / 65448                  7.9
    WD 500G HD,                      3GB/sec                          IDE                                                                            5.9
    testbed & lab computers          6GB/sec                          IDE                                                                            5.9
                                     6GB/sec                          AHCI                                                                           5.9

    Bonnie++ 1.03 test results for Linux; for each setup the first run is at 3GB/sec with IDE and the second at 6GB/sec with AHCI (except as noted).

    Columns: Sequential Output (per-chr, per-block, rewrite), Sequential Input (per-chr, block), Random Seeks; values are K/sec (seeks: /sec), each followed by %CP.

    Server / Setup                                Mode, Link    Size     Out chr      Out blk      Rewrite      In chr       In blk       Seeks        Files
    Linux Testbed, 2008 vintage 160G WD HD        IDE, 3GBs     15720M   99215   82   103960   9   47025   4    98752   90   128712   7   203.7    0   16
                                                  AHCI, 6GBs    15720M   105761  87   103348   8   46492   4    92428   83   118931   4   192.0    0   16
    Linux Testbed, 250G EVO 850 SSD (IMPROVED)    IDE, 3GBs     15720M   115854  97   250008  23   101661  11   110786  99   243710  15   9787.4  30   16
                                                  AHCI, 6GBs    15720M   113149  95   308149  26   182605  15   113467  99   656404  30   +++++  +++   16
    Linux eceSVN (DDR2, M3A785T-M, 2.5GHz,
      160G WD HD)                                 IDE, 3GBs     15G      72777   98   109252  43   44443   21   45691   92   136898  25   255.9    1   16
                                                  AHCI, 3GBs    15G      70148   95   108934  40   43686   20   43949   92   138138  25   280.0    1   16
    Linux testBed, 1TB WD 10k rpm Velociraptor    IDE, 3GBs     15720M   108624  90   196638  17   91043   10   104320  94   232484  14   413.4    2   16
                                                  AHCI, 6GBs    15720M   113884  95   195132  17   84157   8    105111  93   215252  10   586.9    1   16
    eceUbuntu-opt-directory                       IDE, 3GBs     47360M   91171   99   119559  22   68295   14   91713   98   262224  31   470.2    2   16   (25134 80)
    
    

    Harddrive Speed (RedHat)

    hdparm -c1 (32-bit xfer) and -d1 (DMA) are enabled unless otherwise stated.

    Speed tests are read tests with hdparm -t; hdparm -T (cached read) results are shown in square brackets [].
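
    For reference, the invocations behind those numbers (the device name is an assumption):

    hdparm -c1 -d1 /dev/hda    # enable 32-bit I/O and DMA
    hdparm -t -T /dev/hda      # -t buffered disk reads, -T cached reads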


    Historical Computer Purchases

    Monitor Purchases

    Network Equipment

    Operating Systems - ECE

    Operating Systems - Man. Sci.

    UWDir update

    For the university
    
             http://ego.uwaterloo.ca
    
    First - use the UW-SignOn link below to set it to use your Polaris/Nexus password
    using the EngMail server.
    
             http://ego/~uwdir/UW-SignOn.html
    
    Then click on the link to "Update Your UWdir Data".
    
             https://ego/~uwdir/Update
    

    Trivia

    2004 inventory

    OS Boot Times

    It's that time of the century - Win XP is dead and one option is to jump older desktops and laptops to Linux!

    Here is a comparison of some Ubuntu flavours.

    Test drive them with a bootable USB key created using the Universal USB Installer.

    This is for a 3.4GHz AMD 2-core with 4G of RAM on an Asus M4A785T-M motherboard with 500G HD done March 2014.

    Operating Sys.       Boot Time (sec)   Total Boot to Login (sec)   Hibernate (sec)   Restore (sec)
    Windows XP Pro x86   44                72                          20                17
    XUbuntu 13.04        -                 46                          <5                <5

    Linux Optimizations

    19 June 2018 - the CentOS 6 file server eceServ1 had to have SMB 2 enabled so that it would work with Chorus machines (Win 10 1709): edit smb.conf and add "max protocol = SMB2" (Samba 4.6).
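
    The smb.conf change, as a sketch (global section, Samba 4.6):

        [global]
            max protocol = SMB2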

    Nov 2017 - CentOS 7 eceWeb & eceAdmin can't send email to @gmail but on-campus works.

    yum install postfix
    rpm --erase ssmtp
    chkconfig postfix on
    service postfix start
    

    Joining machines to domains. Assuming the local passwd file is used for UID/GID. Edit the smb.conf and:

    https://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/idmapper.html
    
    eceWeb  - edit smb.conf
            security = ADS
            realm=nexus.uwaterloo.ca
    
     net ads join -U'__Domain_Account___'
     Enter __Domain_Account___'s password:
    Using short domain name -- NEXUS
    Joined 'ECEWEB' to dns domain 'NEXUS.UWATERLOO.CA'
    DNS Update for eceweb.uwaterloo.ca failed: ERROR_DNS_GSS_ERROR
    DNS update failed!
    
    [root ~]# net ads testjoin
    Join is OK
    

    Here is a simple script to track recent snapshots on FreeNAS or ZFS. I just cron it daily so it sends me an email and if I don't see the snapshots or see 0 size for the previous day something is wrong ...

    #!/bin/sh
    # Report yesterday's and today's ZFS snapshots from the backup server (run daily from cron)
    YESTERDAY=`date +%Y%m%d --date yesterday`
    TODAY=`date +%Y%m%d`
    echo Yesterday was $YESTERDAY and today is $TODAY

    ssh _server_ zfs list -t snapshot | grep $YESTERDAY
    ssh _server_ zfs list -t snapshot | grep $TODAY
    

    hdparm tests (cached read, drive read) -c1 -d1

    UPS Inventory

    Unless otherwise stated, all UPSs are APC Smart-UPS units

    Some UPS & IT Events

    14-Mar-2019 - reboot eceTesla3 - GPU driver became unresponsive, process issues

    14-Mar-2019 - reboot eceTesla2 - load 300, nvidia-smi was unresponsive

    ~7-Mar-2019 - reboot eceTesla0 - process spawning issues, kernel update


    Nov 19 - e2-sw-2403a-a lost its uplink, spanning tree issue

    Sept 17 - e2-sw-2403a-a wasn't working for ~20 ports - no link lights at all, re-seated management board

    May 4 - e2-sw-2403a-a installed

    DC POP down (it's Cisco?) - Sept 28 to 31

    2010? - APC 3000VA UPS blew out its electronics seconds after a power outage.

    Apr 2009 - CPH-1333A battery failure resulted in UPS continually power cycling.

    Mar 2009 - severe battery failure (puffing, venting, hot) on 1400VA in E2-2361 on Unix rack

    Dec 2008 - Major failure of 3000VA UPS in E2-2361 on Unix rack

    2006? - minor battery failure on 1400VA UPS in E2-3340 resulting in the UPS turning off and not failing over to mains

    UPS Batteries

    Batteries only seem to last 2 to 3 years. This is significantly shorter than the 20+ years I was used to at a commercial UPS manufacturer. Visually, batteries indicate overcharging (puffing, cracking) and some failures involve the battery going high-Z.
    Measuring voltage on a pair of 3000VA APC UPSs, the battery-pair voltages were 26.2V and 27.4V, so overvoltage charging could only happen due to a difference in battery capacity and/or impedance, resulting in the lower-capacity/higher-Z battery being overcharged.

    Replacement Batteries

    Projects

    WOL

    WOL works with wake-up from Windows on the M2N and M3 Asus AMD motherboards; use ether-wake on Linux (net-tools rpm).

    Websites with useful info:
    Good backgrounder: http://www.dslreports.com/faq/wol?text=1
    Linux source code: http://ahh.sourceforge.net/wol/
    Great site listing tons of packages to use: http://gsd.di.uminho.pt/jpo/software/wakeonlan/mini-howto/wol-mini-howto-3.html
    Magic UDP packet port 9 (almost all)
    AOpen M/B supports WOL with PCI card using connector
    ASUS P3 boards require a PCI Ethernet NIC that supports WOL - none of ours do
    P4P-800VM apparently supports WOL but I've not gotten it to work.
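
    A minimal sketch of waking a machine from a Linux host with ether-wake (the interface and MAC address are placeholders):

    # sends the magic packet on the local segment; run as root
    ether-wake -i eth0 00:11:22:33:44:55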

    Special Software - Windows

    HD Specs

    Special Equipment

    Software Tools for Windows

    Securing Information

    FYDP Computer Demand

    SVN Performance Evaluation

    In 2018 performance issues with eceSVN were raised. Here are some performance results using
    svn co http://ecesvn[-new].uwaterloo.ca/courses/ece108

    The virtual machine servers have Intel i5-6500 CPUs, a 10Gb/s connection back to the file server holding the VM images, and 10Gb/s back to the file server with the SVN repos; each was running 4 VMs with a single core assigned.

    My guess for the poor performance of the 2-core VM is that the KVM server is running CentOS and I've seen 2x worse performance on CentOS than Ubuntu when many cores / threads are in use. It seems as if the older kernels in CentOS do not handle new CPUs and many cores well.

    KVM Server            Server       OS                  Specs                                 Runtime
    N.A.                  eceSVN       CentOS 6            32G RAM, 4-core i5-3300               46 min, 44 min
    CentOS 7, 10Gb/s      eceSVN-new   Ubuntu 18.04 LTS    1-core of i5-6500, 4G (swap in use)   15 min
                                                           1-core of i5-6500, 8G                 10 min
                                                           2-core of i5-6500, 8G                 25, 25, 11:15 min
                                                           3-core of i5-6500, 8G                 12 min
    Ubuntu 18.04, 1Gb/s                                    1-core of i7-8700, 8G                 9 min
                                                           2-core of i7-8700, 8G                 9:15 min
                                                           3-core of i7-8700, 8G                 10 min

    Raspbian Build Performance Evaluation

    Nov 18, 2014 - This was a test to evaluate the space and computing requirements to build Raspbian for the Raspberry PI.

    Kernel code is 837M clean and 1.2G when built. ECELinux5 is running mprime in the background niced.

    source /opt/bin/setup-raspbian.csh
    make clean; make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcmrpi_defconfig
    date ; make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j 6 ; date

    Some rough numbers for performance comparison:

    -j 3
    Server, Specs Location Build Ops Build Time
    Linux5, x4 i5 tmp -j 6 6:33 : 9:58:16 10:04:49
    Linux5, x4 i5 home SSD RAID 10 -j 6 9:27 10:25
    Linux5, x4 i5 home SAS RAID 10 -j 6 12:27 : 11:27:34 11:40:01
    Linux5, x4 i5, 1SSD on server, sustains 14Mb/sec to file server home -j 6 11:45 : 7:48:57 8:00:42
    Linux5, x4 i5, 1SSD on server home -j 6 13:10 : 7:32:33 7:45:42
    Linux1, x8 AMD FX tmp -j 12 6:59 : 10:07:30 10:14:29
    Linux1, x8 AMD FX tmp -j 6 7:01 : 10:46:06 10:53:07
    Linux1, x8 AMD FX home, 6 x SAS -j 12 9:37 : 11:11:50 11:21:27
    Linux2, x8 AMD FX home RAID 10 SSD -j 12 8:01 6:48 7:25
    Linux1, x8 AMD FX, 20 Mb/sec to file server home, 5 x SAS, 1 SSD -j 12 9:36 : 8:05:37 8:15:13
    Linux6, x2 AMD "550" tmp -j 3 16:57 : 9:31:40 9:48:37
    Linux5, x4 i5 home 23:57 : 6:18:20 6:41:17
    Linux5, x4 i5 home 67:12 : 11:44:38 12:51:50
    Linux1, x8 AMD FX home, 6 x SAS 2:13:15 : 12:59:55 14:13:10
    Raspberry PI A, 256M, 700MHz SD card 16:15:00 : ~1pm 5:15:14am

    Antec ISK 110 case Power Supply efficiency

    Using i3-8100 with 1x16G 2400MHz RAM, M.2 500G 860 EVO, ROG Strix H370-I Gaming MB:

    Power Supply BIOS power Win 10 power
    Antec ISK 110 P/S 29.1W 14.2W
    Antec MT 352 80Plus Bronze 33.8W 19.0W

    GPU Processing Performance - ECE 459 - March 2018

    This is to collect the performance info to inform purchasing decisions. Power draw and run times are for the ECE459 nbody assignment with 5,000 * 64 points.

    Model              Perf       Max. Power   Idle & compute Power   nbody (min:sec)   Cost
    Quadro 600 GF108 ?GF/s ?W ? 1G GDDR3, 128-bit bus, 96 cuda cores
    GTX 950 1.6GF/s 90W 5:00 ?
    Tesla M2090 1.3GF/s 225W 74W / 109W 3:15 $200 used Jan 2018
    GTX 1060 3.9GF/s 120W 25W / 50W 2:18, 2:16 $300+
    Tesla P4 5GF/s 70W 24W / 33W 1:50 est. $2,500
    GTX 1070 5.8GF/s 150W 20W / 75W 1:35 $300 used, $600+
    RTX 2070 -GF/s 215W ? / 90W 1:21, 1:21 $ ?
    GTX 1080 8.2GF/s 180W 1:08 $750+
    Titan XP 11GF/s 250W 0:45 $1,200

    GPU Processing Performance Evaluation - ECE 459 - March 2018

    This test runs the ECE 459 nbody test from Assignment 2 with 5,000 * 64 points. Run time on various GPUs is below
    GPU run time Misc. Specs
    Nvidia Quadro 600 GF108 (runs at 76C) 28:24 Ubuntu 18.04, i3-6100
    Nvidia GTX 950 5:06, 5:02, 5:01 Ubuntu 16.04 or CentOS 7.4, Ryzen7 1700
    Nvidia Tesla M2090 3:16, 3:13 Ubuntu 16.04, i3-4170
    3:14, 3:13 Ubuntu 18.04, i3-6100
    Nvidia GTX 1060-6G 2:16, 2:17, 2:17 Ubuntu 18.04, NVidia driver 390.87
    2:18, 2:18 Ubuntu 18.04, NVidia driver 415.13
    Nvidia Tesla P4 1:57, 1:50, 1:55 CentOS 7.4, Xeon Gold 5120 14-core
    Gigabyte GTX 1070 1:34, 1:35, 1:36 Ubuntu 18.04 LTS, i7-7700K
    GeForce GTX 1070 FTW 1:30, 1:31, 1:31 Ubuntu 18.04 LTS
    PNY XLR8 GTX 1070 FTW 1:27, 1:27, 1:27 Ubuntu 18.04 LTS
    Nvidia GTX 1080 1:08, 1:09, 1:08 Ubuntu 17.10
    1:09, 1:08 Ubuntu 18.04
    Nvidia Titan XP 0:48, 0:48, 0:47 Ubuntu 17.10, i7-8700K 6-core, DUAL GPU only 1 used
    0:47, 0:47, 0:44 Ubuntu 16.04, i7-8700K 6-core, DUAL GPU only 1 used

    GPU Processing Performance Evaluation - Caffe mnist example - March 2018

    This test runs the caffe mnist example. Run time on various GPUs is below
    GPU run time Misc. Specs
    Titan Xp 1:19 Ubuntu 16.04, Dual GPU in machine, only one used
    1:42, 1:43, 1:43 Ubuntu 17.10, Dual GPU in machine, only one used
    GTX 1080 1:10, 1:09 Ubuntu 18.04
    1:37, 1:39 Ubuntu 17.10
    GTX 1060-6G 1:14, 1:16 Ubuntu 18.04
    GTX 950 1:54 Ubuntu 16.04
    2:33 Ubuntu 17.10
    1:51, 1:51 Ubuntu 18.04

    ECE 493 Quantum QED simulation using Python qutip - Feb 2019

    Below are the runtimes for the A-3.py file from Chris_Warren's Masters thesis (python3 A-3.py). The OS is Ubuntu 18.04 unless otherwise stated.

    CPU                          SpecMark (single / multi)   Part 1 runtime (sec)   Part 2 runtime (sec)
    i7-7700k : June 2019 2585 / 12055 109 6.0
    i7-7700k 109 6.3
    i7-8700 CentOS 7 : June 2019 2705 / 15995 110 5.2
    Ryzen 7-1700 : June 2019 1775 / 13750 113 9.4
    Ryzen 7-1700 130 -
    i7-6700 2156 / 10011 125 6.7
    i7-6700 : June 2019 124 6.8
    i5-6500 1945 / 7235 123 7.2
    i5-6500 : June 2019 126 7.5
    i7-3770 2068 / 9284 153 13.4
    i7-3770 : June 2019 154 13.1
    Xeon Gold 5120 : June 2019 1725 / 18145 168 7.9
    Xeon Gold 5120 163 8.0

    Parallel Processing Performance Evaluation - ECE 459 - Jan 2018

    For this test the queens problem makes use of all CPU threads. I did not test de-activating hyperthreading.
    For Ubuntu 17.10 the program was recompiled, which may explain the faster runtimes.


    I ran queens_omp 16 and run times were:
    CPU run time Specs Passmark Test (single / all threads)
    AMD EPYC 7302P ? 16-cores, 3.3GHz, Ubuntu 18.04 LTS 2248 / 30994
    AMD Ryzen 2700X 0:43, 0:42, 0:43 8-cores, 3.7GHz, Ubuntu 18.04 LTS 2193 / 16985
    Xeon Gold 5120 0:42, 0:43, 0:42 14-cores, 2.2GHz,CPU ~$3000, Ubuntu 18.04 LTS 1725 / 18145
    Threadripper 1920X 0:55, 0:57, 0:56 3.5GHz, 12-cores, 195W, CPU ~$700, Ubuntu 16.04 LTS 1978 / 18285
    Ryzen7-1700 0:52, 0:52 8-cores, 3.0GHz, 4 of 2400MHz RAM, Ubuntu 18.04 with 4.15.0-42 kernel, Dec 16, 2018 1775 / 13750
    0:51, 0:51, 0:55 8-cores, 3.0GHz, 4 of 2133MHz RAM, Ubuntu 18.04 with 4.15.0-20 kernel, May 2, 2018
    0:58, 0:57, 0:57 (118W power draw) 8-cores, 3.0GHz, 4 of 2133MHz RAM, Ubuntu 17.10 with 4.13.0-37 kernel
    i7-8700K 0:60, 0:55, 0:55 6-cores, 3.7GHz, Ubuntu 17.10, 8G DDR4-2666 * 2, 2 of Titan GX GPUs 2705 / 15995
    i7-8700 0:52, 0:51, 0:52 6-cores, 3.2 to 4.6GHz, Ubuntu 18.04, 8G * 2, 109W running 2779 / 15154
    i5-8400 0:63, 0:63, 0:64 6-cores, 1x16G RAM, 2.8 to 4.0GHz, Ubuntu 18.04 LTS, 4.15.0 kernel 2335 / 11745
    i7-7700K 1:07, 1:07 4-cores, 4GHz, Ubuntu 18.04 LTS Dec 2018 2585 / 12055
    1:13, 1:16, 1:08, 1:08
    i7-6700 1:17, 1:17, 1:18, 1:18 4-cores, 3.4GHz, Ubuntu 18.04 LTS June 2018 2156 / 10011
    1:20, 1:19 Dec 2018
    i3-8100 1:35, 1:35 4-cores, 1x16G RAM, 3.6GHz, Ubuntu 18.04 LTS, 4.15.0 kernel 2105 / 8090
    1:33, 1:36, 1:34 2x16G RAM
    i7-3770 1:34, 1:34 4-cores, 3.4GHz, Ubuntu 18.04.02 2068 / 9284
    Xeon Gold 5120 1:35, 1:38 14-cores, 2.2GHz,CPU ~$3000, CentOS 7.4 1725 / 18145
    i5-6500 1:37, 1:37, 1:36, 1:35 4-cores, 4x16G RAM, Ubuntu 18.04 LTS, 4.15.0 kernel 1945 / 7235
    Ryzen3-2200G APU 1:46, 1:47, 1:49, 1:49, 1:47 4-core, 4-thread, 3.5GHz, Ubuntu 18.04 LTS 4.13.0-45 kernel, 33.6W in Ubuntu idle, 83.2W running nqueens_omp 1820 / 7355
    nuc7i7 i7-7567U 2:25, 2:25, 2:24 2-cores, 2x8G RAM, Ubuntu 18.04 LTS 2264 / 6497
    i3-6100 3:11, 2:35, 2:39, 2:38 2-cores, 4x16G RAM, Ubuntu 18.04 LTS, 4.15.0 kernel 2110 / 5495
    AMD Phenom II x6 1090T 2:41, 2:39, 2:42 6-cores, 3.2GHz, Ubuntu 18.04, July 2018 1220 / 5595
    i3-4130 2:50, 2:53 2-cores, 3.4GHz, Ubuntu 18.04 LTS, 4.15.0-23 kernel, July 2018 1963 / 4793
    i7-8700K 3:18, 3:32 6-cores, 3.7GHz, Ubuntu 16.04, 4.10 kernel, 8G DDR4-2666 * 2, 2 of Titan GX GPUs, 80.5W idle, 156W running 2705 / 15995
    i7-8700 3:24, 3:20 6-cores, 3.2 to 4.6GHz, CentOS 7.4, 8G * 2, 68.5W running 2779 / 15154
    i3-3220 3:31, 3:28, 3:30 2-cores, 3.3GHz, Ubuntu 18.04 LTS, 4.15.0-23 kernel, July 2018 1760 / 4233
    i7-7700K 3:34, 3:41 4-cores, 4GHz, CentOS 7.4, Jan 2018 2583 / 12055
    3:43, 3:31, 3:47 4-cores, 4GHz, CentOS 7.5, July 2018
    AMD FX-8350 3:31, 3:26, 3:20 8-cores, 4.0GHz, Ubuntu 18.04, July 2018 1510 / 8950
    Ryzen7-1700, 2 of 2666MHz RAM 3:45, 3:51 (89.7W power draw) 8-cores, 3.0GHz, CentOS 7.4 with 3.10.0-693 kernel 1775 / 13750
    Ryzen7-1700, 4 of 2133MHz RAM 4:00 (only 83W power draw) 8-cores, 3.0GHz, CentOS 7.4 with 3.10.0-693 kernel
    Ryzen7-1700, 2 of 2666MHz RAM 4:01, 4:01 (88.5 power draw) 8-cores, 3.0GHz, CentOS 7.4 with 4.15.2 kernel
    Ryzen7-1700, 4 of 2133MHz RAM 4:03, 4:03 (only 83W power draw but 125W running CPU burn-in) 8-cores, 3.0GHz, CentOS 7.4 with 4.15.2 kernel
    Ryzen7-1700, 4 of 2133MHz RAM 4:16, 4:09 (only 83W power draw but 105W in BIOS) 8-cores, 3.0GHz, Ubuntu 16.04 LTS, 4.10 kernel
    Ryzen7-1700, 2 of 2666MHz RAM 4:21, 4:16, 4:09, 4:16, 4:06 (90.5W power draw) 8-cores, 16-thread, 3.0GHz, Ubuntu 16.04 LTS, 4.10 kernel
    Xeon E5410 16G RAM 4:50, 4:47, 4:51 4-cores, 8-thread, 2.33GHz, Ubuntu 17.10 1000 / 3268
    Ryzen7-1700 no threading, 4 of 2133MHz RAM 5:59, 6:11 (power draw ~85W) 8-cores, 3.0GHz, Ubuntu 16.04 LTS, 4.10 kernel
    i7-6700 4:30, 4:30, 4:07 4-cores, 3.4GHz, CentOS 7.5, July 2018
    i7-3770 4:21, 4:08 4-cores, 3.4GHz, CentOS 7.4 2068 / 9284
    Ryzen3-2200G APU 5:04, 4:37 (nqueens_omp 15: 53, 42 sec) 4-core, 4-thread, 3.5GHz, Ubuntu 16.04 LTS, 4.10 kernel, ~74W power draw
    i5-8400 5:13, 5:01, 6:09, 4:59 6-cores, 2.8 to 4.0GHz, CentOS 7.5, 3.10 kernel
    5:12, 5:34 6-cores, 2.8 to 4.0GHz, CentOS 7.5, 4.17 ML kernel
    Ryzen3-2200G APU 5:23, 5:55 4-core, 4-thread, 3.5GHz, CentOS 7.4 3.10.0 kernel, ~70W power draw
    i5-3450 6:58, 6:46 4-cores, 4x16G RAM, 3.1 to 3.5GHz, CentOS 7.5 1856 / 6520
    i3-6100 7:16, 7:09, 7:34 2-cores, 4x16G RAM, CentOS 7.5, 3.10.0-862 kernel
    i3-8100 7:17, 7:17, 7:07 4-cores, 3.6GHz, CentOS 7.5, 3.10 kernel
    i3-4170 7:22, 7:38 2-cores, 3.7GHz, Ubuntu 16.04 LTS, 4.10 kernel
    7:27, 7:34 compiled on Ubuntu 16.10, 2-cores, 3.7GHz, Ubuntu 16.04 LTS, 4.10 kernel
    7:32, 7:33 2-cores, 3.7GHz, CentOS 7.5 3.10 kernel
    i5-6500 8:17, 8:33 Other SW running 4-cores, 4x16G RAM, CentOS 7.5, 3.10.0-862 kernel
    i3-4170 Hyper-Threading disabled 11:04, 10:31 2-cores, 3.7GHz, Ubuntu 16.04 LTS, 4.10 kernel

    Intel Kaby Lake i7-7700K Performance Evaluation - April 2017

    An identical system had its CPU switched from i7-6770 to i7-7700K. The 7700K had a 25% higher clock but that was not exploited until the CentOS 7 kernel was updated to 4.10.10 from the stock 3.10. For performance testing a simple Quartus 15.0 project was compiled (1, 2, 4 at a time).

    Summary:

    Server specifications:

    Server, Specs Networking File Source Num. Compiles Build Time
    eceLinux2 i7-6770 kernel 3.10 1Gb/s eceServ1 1 80, 79, 79 Reference
    2 1:26
    4 1:56, 1:45
    eceLinux2 i7-6770 kernel 4.10.10 1Gb/s eceServ1 1 49, 44 44% faster
    2 48, 46, 46
    4 58, 59
    eceLinux2 i7-6770 kernel 4.10.10 1Gb/s or 10Gb/s local SSD 1 37, 37, 37, 37, 37 50% faster
    2 37, 38, 37, 39, 39
    4 49, 51, 53, 50, 51
    eceLinux1 i7-6770 kernel 4.10.1 1Gb/s eceServ1 1 47, 45, 45 43% faster
    2 46, 45
    4 58, 63, 58
    8 66, 55
    eceLinux1 i7-6770 kernel 4.10.1 10Gb/s eceServ1NEW 1 50, 45, 48 ?% faster
    2 47, 47, 46
    4 58, 62, 61
    eceLinux1 i7-7700K kernel 4.10.1 1Gb/s eceServ1 1 60, 52, 49, 50 SLOWER
    2 51, 61, 51
    4 70, 68
    eceLinux1 i7-7700K kernel 4.10.10 10Gb/s eceServ2 1 45, 44 44% faster
    2 49, 48
    4 56, 55
    eceLinux1 i7-7700K kernel 4.10.10 10Gb/s eceKVMserv 1 40, 40, 39 50% faster
    2 40, 40, 39, 40
    4 45, 46, 46, 45
    8 1:21
    eceLinux1 i7-7700K kernel 4.10.10 10Gb/s local SSD 1 33, 33, 33 58% faster
    2 33, 33, 33
    4 42, 43
    eceLinux3 i7-3770 kernel 3.10.0 1Gb/s eceServ1 1 58, 53, 55, 1:01, 55, 56 ?
    2 56, 55, 55, 1:08, 57, 1:06
    4 1:08, 1:07, 1:07, 1:10, 1:09, 1:06
    eceLinux3 i7-3770 kernel 4.10.10 1Gb/s eceServ1 1 1:51, 51 Unpredictable
    2 2:51, 50, 2:50, 1:50, 51
    eceLinux3 i7-3770 kernel 3.10.0 1Gb/s local SSD 1 46, 46 ?
    2 51, 47, 48
    4 1:01, 1:02, 1:02
    eceLinux9 VM 1-core i5-6500 kernel 3.10.0, 8G RAM 1Gb/s eceServ1 1 1:06, 1:13, 1:07 ?
    2 1:51, 1:53
    eceLinux9 VM 1-core i5-6500 kernel 4.10.10, 8G RAM 1Gb/s eceServ1 1 1:14, 1:01, 1:00, 59 ?
    2 1:45
    eceLinux9 VM 2-core i5-6500 kernel 4.10.10, 8G RAM 1Gb/s eceServ1 1 59, 59, 58 ?
    eceLinux9 VM 1-core i5-6500 kernel 4.10.10, 8G RAM 10Gb/s eceServ1NEW 1 1:11, 1:02 ?
    2 1:51, 1:52, 1:50
    eceLinux5 i5-8400 July 2018 1Gb/s eceServ1 1 1:04, 0:53
    2 1:01, 1:03
    4 ?

    FIO tests to various file servers

    fio --rw=randread --bs=64k --numjobs=4 --iodepth=8 --runtime=30 --time_based --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name task1 --filename=/home-fast/1.txt --size=10000000
    
    eceKVMserv:  READ: io=3732.8MB, aggrb=127349KB/s, minb=31801KB/s, maxb=31897KB/s, mint=30009msec, maxt=30014msec
    eceServ2:    READ: io=33608MB, aggrb=1120.2MB/s, minb=286662KB/s, maxb=286981KB/s, mint=30001msec, maxt=30002msec
    eceServ1:    READ: io=3325.8MB, aggrb=113444KB/s, minb=28348KB/s, maxb=28387KB/s, mint=30008msec, maxt=30019msec
    eceServ1NEW: READ: io=25806MB, aggrb=880783KB/s, minb=219712KB/s, maxb=220599KB/s, mint=30001msec, maxt=30002msec
    
    eceKVMserv:  READ: io=3.7GB, aggrb=127MB/s, minb=32MB/s, maxb=32MB/s
    eceServ1:    READ: io=3.3GB, aggrb=113MB/s, minb=28MB/s, maxb=28MB/s	[1Gb/s]
    eceServ2:    READ: io=34GB, aggrb=1.1GB/s, minb=287MB/s, maxb=287MB/s
    eceServ1NEW: READ: io=26GB, aggrb=881MB/s, minb=220MB/s, maxb=221MB/s	[10Gb/s]
    

    AMD Servers Quartus Power and Performance Evaluation - June 2015

    Windows XP, Antec 80Plus power supply, Asus M5A88-M AM3+ motherboard, 32G DDR3, SATA HD & DVDROM. The Quartus compile is a DE2 demo circuit and a simple ECE 124 stop-light circuit.

    The Linux Power Draw column includes the computer specs above + an Adaptec 6405e RAID card, with an SSD in addition to the SATA HD above

    Student lab computers use the 560 CPU, many of the 270's are available.
    AMD CPU Model    Quartus Compile (Demo / Stoplight)    BIOS Power    Windows XP Power Draw    Computers With This CPU    Linux Power Draw
    Phenom II 560 2-core 3.3GHz 18 / 29, 27, 28, 25 sec 83W 48.5W eceLinux6,7,8,9.10, Mail, Arbeau, ieee 73W
    445 3-core 25 sec / crashes 68W? 46.0W eceLinux12 68W
    FX4300 4-core, 3.8GHz ? / 26, 33, 26 sec 62W? 44.5W 62W
    270 2-core 3.4GHz ? / 26, 27, 26 sec 80.0W 44.5W 65.4W
    250 2-core 3.0GHz ? / 43, 44, 36, 41 sec ? W 44.1W
    Phenom II 550 2-core 21 sec / ? ? ?
    1090T 6-core ? ? ? wo32
    1055T 6-core ? ? ? kvm
    FX 8350 8-core ? ? ? eceLinux1,2,3

    Asus H97 Motherboard Performance Issues - Dec 2016

    It was noticed that the Asus H97M-E motherboards, with SSDs, were significantly slower than 4-year-old Asus P8H77-M motherboards, and this holds for Win 7 and Win 10 even after trying BIOS and driver tweaks / updates. Windows performance tests of the SSD, RAM and CPU showed good performance.

    The performance tests with Linux were done using CentOS 6, Quartus 10.1 targeting Cyclone II (auto) from /praetzel/QuartusTest101/lab1

    Linux
    Quartus 13? targeting Cyclone II and IV, project on NFS share 1Gb/s networking
    Motherboard OS CPU Quartus Compile Times
    Seconds
    P8H61 CentOS 6 i3-2120 3.2?GHz 12, 12, 12
    Cyclone IV GX 20, 22, 21, 20
    H97M CentOS 6 i3-4170 3.7GHz 11, 12, 11
    Cyclone IV GX 19, 17, 19, 18
    H170M CentOS 6 i3-6700 3.7GHz 11, 12, 12
    Cyclone IV GX 19, 19, 18, 18
    P8H77-M ece-public20 Win 7, SSD i3-3220 18, 18, 19
    SSD: 12, 12, 12
    1Gb/s: 14, 14, 14
    ieee CIFS: 15, 15, 15
    Windows
    Quartus 15.1 targeting Max 10, project on N: cifs share, 100Mb/s networking
    H97M ece-mcu20 Win 7 i3-4170 3:28, 2:00, 1:56
    H97M ece-mcu20 Win 10 i3-4170 2:23, 1:53, 2:19
    SSD: 2:01, 2:00, 2:04
    H81M-C Win 10 i3-4150 3:14, 3:06, 3:07
    SSD: 2:50, 2:54
    P8H77-M ece-fpga29 Win 10 i3-3220 1:36, 1:37, 1:34
    P8H77-M ece-rtos7 Win 7 i3-3220 1:47, 1:29, 1:29
    P8H77-M ece-cpuio29 Win 7
    1Gb/s eth
    i3-3220 1:01, 0:53, 1:00
    P8H77-M ece-cpuio29 Win 7
    1Gb/s eth
    i3-3220 1:01, 0:53, 1:00
    P8H77-M ece-fpga29 Win 10 - April 21, 2017
    Fresh install
    i3-3220 N: 1:57, 1:58
    C: 1:57, 1:43
    H97M ece-mcu20 Win 10 - April 21, 2017
    Fresh install
    i3-4170 n: 3:09, 2:57
    C: 2:37, 2:38
    May 17, 2017 Network upgrade to 1Gb/s
    Motherboard OS CPU Quartus Compile Times
    Seconds
    H97M ece-mcu17 Win 7 i3-4170 1Gb/s N: 2:27, 2:18
    SSD: 1:55, 1:51, 2:06
    H97M ece-mcu18 Win 7 i3-4170 1Gb/s N: 1:10, 2:54, 2:51, 2:42
    SSD: 2:31, 2:44, 2:44
    H97M ece-mcu19 Win 7 i3-4170 1Gb/s N: 2:12, 1:35, 1:44
    HDD: 1:33, 1:26, 1:33
    Windows 7 Memory Performance Evaluation

    This was a simple test to evaluate if I should buy 1 or 2 sticks of RAM in the P8H77-M Asus M/B using DDR3-1333 4G sticks and the Windows Performance test:
    1 of 4G RAM: 5.9
    2 of 4G RAM: 7.5
    4 of 4G RAM: 7.5

    Motherboard Inplace Upgrade

    Asus H81M-C/CSM

    Feb 2014 - Upgrading from Asus M2Nmx-SE Plus and Win 7 - Either use a new image and re-install or get all but two drivers (PCI, Unknown) from the install USB key. Then install the setup.exe program in the Video and MEI directories under drivers.

    Asus P8H77M-C/CSM

    Upgrading from Asus M2Nmx-SE Plus and Win 7 - simply replace the motherboard, go from DHCP to fixed IP until DHCP has a new MAC set

    Windows 7 Computer Performance Evaluation

    Computer Name Windows Performance Score Specs
    CPU RAM Graphics Game Graphics Hard Drive Motherboard Other Specs
    Asus M2N PV VM Motherboard, AMD AM 2 CPU
    Control6 4.4 5.7 3.5 3.0 5.7
    Control7 4.4 5.7 3.4 3.0 5.7
    Control9 4.4 5.7 2.9 3.0 5.7
    Asus M2N SE Plus Motherboard, AMD AM 2+ CPU, DDR 2
    lab10 4.4 5.1 3.3 3.2 5.9 M2N SE Plus 160G HD, 4G DDR2, LE 1640 2.6GHz x1
    4thYear0 4.9 4.9 3.7 3.3/3.2 5.9 M2N SE Plus 160G HD, 4G DDR2 800MHz, AMD 6100 3.1GHz x2
    motor2 6.5 4.9 3.7 3.3 5.9 M2N SE Plus 160G HD, 4G DDR2
    motor8 6.5 4.9 3.7 3.2 5.9 M2N SE Plus 160G HD, 4G DDR2
    motor22 4.9 5.5 3.2 3.5 5.8 M2N SE Plus 160G HD, 4G DDR2, "4400" CPU x2 2.2GHz
    motor22 6.7 5.9 3.5 3.2 5.8 M2N SE Plus 160G HD, 4G DDR2, CPU "270" x2 3.4GHz
    motor13 6.5 5.9 3.6 3.2 5.8 M2N SE Plus 160G HD, 4G DDR2, CPU "250"? 3.1GHz x2
    motor17 6.5 5.9 3.7 3.3 5.8 M2N SE Plus 160G HD, 4G DDR2
    Asus M3A78 Motherboard, AMD AM 3 CPU, DDR 2
    circuits33 6.7 5.9 3.2 5.1 5.8 M3A78-CM DDR2 4G, 160G HD, AMD "270" 3.4GHz x2
    Asus M4A785T Motherboard, AMD AM 3 CPU, DDR 2
    cpuio7 6.7 5.9 4.4 5.4 5.9 M4A785T-M DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2
    cpuio8 6.7 5.9 4.1 5.3 5.9 M4A785T-M DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2
    cpuio1 6.7 5.9 4.1 5.2 5.9 M4A785T-M DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2
    cpuio3 6.7 5.9 4.4 5.3 5.9 M4A785T-M DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2
    Asus M5A88-M Motherboard, AMD AM 3+ CPU, DDR 3
    public01 6.6 7.4 4.3 5.4 5.9 M5A AMD Black x2 "560" 3.3GHz
    public02 6.6 7.4 4.3 5.4 5.9 M5A AMD x2 "560" Black 3.2GHz
    public06 6.6 7.4 4.4 5.5 5.9 M5A AMD x2 "560" Black 3.2GHz
    Asus P8H77-M Motherboard, Intel i3 CPU, DDR 3
    rtos1 7.2 7.5 1.0 1.0 5.9 P8H77-M DDR3 4G 1333 or 1600MHz, 160G or 500G WD HD
    rtos2 7.2 7.5 1.0 1.0 5.9 P8H77-M DDR3 4G 1333 or 1600MHz, 160G or 500G WD HD
    rtos3 7.1 7.5 1.0 1.0 5.9 P8H77-M DDR3 4G 1333 or 1600MHz, 160G or 500G WD HD
    ecestaf79 7.1 7.5 5.3 5.8 5.9 P8H77-M DDR3 8G 1333, 500G WD HD
    Asus H81M-C Motherboard, Intel i3-4130 CPU, 2 x 4G DDR 3
    testing 7.3 7.6 6.6 6.6 5.9 H81M-C DDR3 2 x 4G 1333, 160G WD HD

    Computer Performance Evaluation

    Altera Quartus is our heaviest software and is used as a performance indicator.

    For each test case several runs were performed. The first is often garbage.

    Feb 2016 and May 2014 Performance Summary

    On Centos 5.11 using the ECE 327 tools - compiling multiple ECE 327 Heating System circuits.

    In Winter 2015 the file server was upgraded to RAID 10 with SSDs in parallel with 10k rpm drives; the April 2014 tests used 15k rpm SAS RAID 10.

    # Compiles   FX8350 8-core   i7-6700   i7-3770 (Apr 2014)   i7-3770 (Weird!)   Celeron G1620   i3-2120      i3-4130      Celeron G3420
                                                                                   (May 2014)      (May 2014)   (May 2014)   (May 2014)
    1            0:29            0:19      0:23                 0:50               0:31            0:27         0:24         0:25
    2            0:30            0:18      0:22                 0:57               0:34            0:28         0:25         0:28
    4            0:33            0:19      0:22                 1:14               0:58            0:39         0:35         0:42
    8            0:43            0:32      0:30                 1:25               1:45            1:10         0:59         1:22
    16           1:21            1:03      -                    1:02!!             -               -            -            -

    May 2013 Performance Summary

    On CentOS 5.6 Quartus 10.1 comparing AMD 6-core 3.2GHz 1090T to 8-core 4.0GHz FX 8350. Enabling the overclocking utility on the M5A88-M motherboard made no performance difference.

    # Compiles   FX 8350 8-core (seconds)   1090T 6-core (seconds)
    2            46                         56
    4            55                         60
    8            70                         73
    16           120                        118

    May 2017 ECE 423 Quartus Performance Summary

    This is a test compiling the ECE 423 Cyclone V project on Quartus 15.1.

    The i3-4170 systems on the Asus H97M-C are slow under Win 7 and Win 10 only; Linux performance is excellent. Driver issue??

    OS    #Cores, base clock, boost clock    CPU Cost    Setup    Synthesis Time (Hour:Min:Sec)    Passmark / Single-Threaded Perf
    Win 10 6, 3.2GHz, 4.6GHz $?? i7-8700, AORUS Z370 Ultra Gaming MB, on 500G HDD 19:06, 18:43, 18:45 min:sec 15,240 / 2540
    using local 250G SSD 17:48, 17:37, 17:53
    using samba share to ieee 20:08, 20:15, 20:06
    Win 10 1803 4, 3.2GHz, 4.6GHz - i7-8700, H310I-PLUS MB,
    on 500G Samsung 970 SSD
    18:20, 18:06, 23:09 15154 / 2779
    on N: drive 19:54, 23:14, 19:49, 20:38
    Win 10 4, 4.2GHz, 4.5GHz $470 i7-7700K, H170M MB, on 250G SSD est. 19 min 12,130 / 2580
    Windows 10 1709 6, 2.8 to 4.0GHz $225 i5-8400, 2x8G, Asus H310Mi-Plus 20:02, 20:01
    i5-8400, 2x16G, Asus H310Mi-Plus 19:32, 19:28, 19:37
    Windows 10 1709 4, 3.6GHz $167 i3-8100, 1x16G, Asus ROG Strix H370-I 22:16, 22:14 (Are these correct? 20:21, 20:23, 21:18 from the Asus H310Mi-Plus MB?)
    i3-8100, 2x16G, Asus ROG Strix H370-I 22:10, 21:45, 21:48, 21:44
    Win 10 4, 3.4GHz, 4.0GHz $400 i7-6700, H170M MB, on 250G SSD 24:09, 22:33, 21:15, 21:09, 21:10, 21:08 10,010 / 2150
    Win 10 4, 3.2GHz, 3.6GHz $270 i5-6500, H170M MB, on 250G SSD 25:02, 22:57, 22:36, 22:28, 22:31 7230 / 1950
    Win 10 + Avast 22:56, 22:36, 22:35
    Win 10 4, 3.2GHz, 3.6GHz $270 i5-6500, H170M MB, on 500G HD 23:24, 22:56, 22:37 7225 / 1950
    Win 10 2, 3.5GHz, 4.0GHz $NA Intel NUC i7-7567U, on NVMe 512G M.2, 32G RAM (MME) 25:40, 24:40, 24:39 6542 / 2267
    Win 10 4, 2.6GHz, 3.5GHz $NA Intel NUC i7-6770HQ, on NVMe 256G M.2 26:00, 25:51, 25:58 9690 / 1903
    Win 10 2, 3.5GHz, 4.0GHz $NA Intel NUC i7-7567U, on NVMe 256G M.2 (ECE) 27:45, 26:58, 27:28, 27:23, 26:08 6520 / 2260
    Win 10 2, 3.8GHz, - $232 i3-6300, H170M MB, 2 x 16G DDR4, on 250G SSD 26:39, 26:58, 26:35, 26:46, 26:11 5850 / 2165
    Win 10 2, 3.8GHz, - $232 i3-6300, H170M MB, 1 x 16G DDR4, on 250G SSD
    Win setting "performance" no graphical effects
    28:55, 26:29, 26:18 5850 / 2165
    Win 10 1709 4, 3.5GHz, 3.7GHz $150 Ryzen-3 2200G, 16G 2666MHz, MSI B350M PRO-VDH MB, on 250G SSD 27:25, 25:11, 25:09 ? / ?
    Win 10 8, 3.4GHz, 3.8GHz $480 Ryzen-7 1700X, Prime X370-Pro MB, on 250G SSD 27:55, 27:05 14,640 / 1865
    Win 10 8, 3.0GHz, 3.7GHz $415 Ryzen-7 1700, Prime X370-Pro MB, on 250G SSD 28:37, 28:44, 28:35, 28:37 13,790 / 1765
    Win 10 8, 3.0GHz, 3.7GHz $415 Ryzen-7 1700, 64G 2133MHz, MSI B350M PRO-VDH MB, on 250G SSD 26:18, 25:35, 25:32 13,790 / 1765
    Win 10 12, 3.5GHz, 4.0GHz $1,200 Threadripper 1920X, X399 AORUS Gaming 7 MB, on 500G HD 27:59, 27:23, 27:36 19455 / 2029
    Win 10 12, 3.5GHz, 4.0GHz $1,200 Threadripper 1920X, X399 AORUS Gaming 7 MB, on 250G SSD 28:02, 27:17
    Win 10 2, 3.7GHz, - $155 i3-6100, H170M MB, on 250G SSD
    running lots of other SW
    32:07, 29:42, 30:34 5485 / 2105
    running little or no other software 29:01, 27:51
    using 500G 7200rpm HDD 28:50, 29:07, 29:13
    Win 10 2, 3.3GHz, - $NA i3-3220, P8H77-M MB, on 500G HDD 59:39, 1:00:24 4225 / 1760
    Win 10 2, 3.7GHz, - $NA i3-4170, H97M-C MB, on 500G HDD 1:30:13, 1:31:31, 1:31:04 5180 / 2130
    Win 10 2, 1.6GHz, 2.7GHz $NA Intel NUC i5-5250U, on NVMe 256G M.2, 16G DDR3L 1:45:15 3630 / 1450

    March 2017 Quartus Performance Summary

    The circuit is trivial - the stoplight from ECE 124, targeting the Max 10 used on the LogicalStep board, with Quartus 15.1 on CentOS 7 and Windows 10.

    Whether the Windows 10 machine was on the Nexus domain or not did not matter; in the past being on the domain doubled synthesis times.

    OS / Setup                                             Network      Synthesis Time (seconds)
    Windows 10 1709
    i5-8400 2.8 to 4.0GHz, 2x8G H310MI-Plus, boot SSD
    Local SSD 1Gb/s 43, 40, 40
    Windows 10 1709
    i3-8100 3.6GHz, 1x16G ROG Strix H370-I, boot SSD
    Local SSD 1Gb/s 45, 43, 43
    Windows 10
    i3-6100 3.7GHz, 16G H170MB, boot SSD
    1Gb/s 43, 42
    Samba to ieee server 61, 54, 50, 50, 50
    NUC i5-5250U on nvme SSD, 16G DDR3L 3:13, 3:08, 3:08
    Samba to ieee3:22, 3:26, 3:23
    using Nexus N: drive 4:13, 3:52, 3:42
    i5-6500 on SSD 44, 41, 41, 41
    Samba to ieee49, 50, 50, 49, 49
    on Nexus using N: drive 50, 46, 46, 45
    H170, i3-6100 office Nexus machine N: drive 1Gb/s 50, 51, 50
    Samba to ieee57, 54, 56
    Samba to eceServ161, 67, 64, 67!!
    H170, i7-7700K, 10Gb/s Ubuntu 18.0410Gb/s 38, 38, 37, 37
    local SSD 34, 33, 32, 32
    H170, i7-6700, 1Gb/s N: drive1Gb/s 52, 51, 52, 63
    Samba to ieee47, 47, 46, 47
    Samba to eceServ158, 61, 57, 58
    local SSD 53, 39, 39, 39, 39
    eceLinux1, i7-6700, 64G, 1Gb/s NFS to eceServ1 1Gb/s 39, 44, 40, 42
    local SSD38, 37, 36, 36
    eceLinux9, Ryzen 1700X CentOS 7, 64G 2133MHz KVR, 10Gb/s NFS to eceServ1 10Gb/s 44, 44, 42, 43
    local HDD38, 37, 38, 39
    Ryzen-7 1700 Win 10, 64G 2133MHz KVR, 1Gb/s Samba to ieee 1Gb/s 58, 54, 54, 54
    ECE P: drive71, 62, 60, 59
    Nexus N: drive56, 55, 53
    local SSD49, 49, 48
    VM CentOS 7 on i5-6500 3.2GHz with VM on FreeNAS via NFS VM has 2 cores 61, 51, 52, 51
    VM has 4-cores52, 51, 51
    Intel NUC i7-7567U 2-core 2.6 3.5GHz boost (MME) local NVMe M.21Gb/s 40, 39, 40
    Samba to ieee50, 51, 51, 56
    Intel NUC i7-6770HQ 4-core 2.6 to 3.5GHz local NVMe M.21Gb/s 42, 41, 41, 41
    Intel NUC i7-7567U 2-core 3.5 to 4.0GHz (ECE) local NVMe M.21Gb/s 45, 43, 43, 43
    Samba to ieee54, 53, 53, 53
    Intel Gold 5120 Xeon 14-core 2.2 to 3.2GHz
    running Ubuntu 18.04 LTS
    NFS eceServ110Gb/s 55, 54, 54
    local SSD49, 51, 49, 50
    Intel Gold 5120 Xeon 14-core 2.2 to 3.2GHz
    running CentOS 7.3 Linux
    with 3.10.0-693.11.6 kernel does not clock past 2.2GHz
    NFS eceServ11Gb/s 57, 57, 58
    local SSD74, 51, 53, 53
    AMD Threadripper 12-core 3.5 to 4.0GHz
    running CentOS 7.3 Linux
    local 500G HDD1Gb/s 52, 36, 36, 35
    Samba to P: drive41, 40, 39
    AMD Threadripper 12-core 3.5 to 4.0GHz
    running Win 10 Edu
    local 500G HDD1Gb/s 42, 43, 43, 42
    local SSD44, 42, 42
    Samba to ieee48, 48, 50, 48
    Intel i7-8700 6-core 3.2 / 4.6GHz boost
    running Win 10 Edu.
    local 500G HDD1Gb/s SSD: 39, 39, 38, 39
    HDD: 42, 39, 38, 39
    Samba to ieee44, 43, 44, 43
    Intel i7-8700 6-core 3.2 / 4.6GHz boost
    running CentOS 7 3.10.0-693 kernel
    local 500G HDD1Gb/s 50, 37, 37, 38, 37
    NFS to eceServ41, 41, 42

    March 2014 Quartus Performance Summary

    This is compiling a trivial circuit - 3 LE's and 14 I/O pins.

    To use ModelSim ASE Tools -> Options -> EDA Tool Options and then set ModelSim Altera to c:\Software\Altera\13.1\modelsim_ase\win32aloem


    Altera tutorial for the VWF simulator: ftp://ftp.altera.com/up/pub/Altera_Material/13.0/Tutorials/Verilog/Quartus_II_Simulation.pdf

    Quartus Version    FPGA Family           Synthesis Time (sec)   Memory Use
    9.0, 10.1 x86      Cyclone II            25                     200M
    13.0 x86 or x64    Cyclone II, III, IV   26 to 33               300M
    13.1 x64           Cyclone II, III, IV   30 to 33               300M
    13.1 x64           Cyclone V             83 to 99               1.1G

    June 2012 Performance Summary

    Using CentOS 6.2 x64 with Quartus 11

    Computer (80Plus Bronze PS, 2W when off/standby)               Power draw: BIOS / OS     1x       2x       4x           8x simultaneous compiles
    Intel i5-2320 on P8Z77-V LX, programs on NFS server,
      Sparkle ATX-450 PN PS                                        60.4W / 51.5W             39,38    42,41    49,42        1:08, 1:08
    Intel i5-2320 on P8Z77-V LX, programs on local HD,
      Sparkle ATX-450 PN PS                                        60.4W / 51.5W             34,34    37,37    42,41        1:03, 1:05, 1:04
    Intel i5-2320 on P8Z77-V LX, Sparkle ATX-450 PN PS             60.4W / 51.5W             38       41       42           1:08, 1:08
    Intel i3-2100 on P8H77-M                                       40W / 27W Win 7 (65W pk)  38       42,42    53,56        1:30, 1:30
    AMD "550" on M4A785T-M                                         78W / 43W (Win XP)        47, 49   49, 50   1:16, 1:19   2:02, 2:12
    AMD "560" on M5A88-M                                           81W / 56W (Win 7)
    AMD "1090T" on M5A88-M                                         112W / 89W (160W pk)      49, 53   51, 53   56, 56       64, 61

    October 2011 Performance Summary

    Quartus 10.1 x64, Win 7, 4G RAM, M2N PV-VM, DDR2 Q11.1 x64, Win 7, 4G RAM, M5A88V Q9.0 x32, Win XP, 4G RAM, M3A785 Q 10.1 x64 Win 7, Intel i3 4G RAM, P8H77-M Q10.1 x64 Win 7, 4G RAM, M5A88-M
    Disk 2.2GHz x1 2.5GHz x2 2.5G x2 + SAV 2.7GHz LE1640 "250" 3.0GHz x2 DDR3 "270" 3.4GHz x2 DDR2 i3-2100 3.1GHz x2 3M cache, DDR3 "560" 3.3GHz x2 DDR3, 7M cache
    Nexus N: 33,39,38 25,26 30,28,25,30 39,29,29,29 31,32,32 39,27,24,25
    Win7 Q10.1 is 20, 20,19,19
    20, 21, 22, 21
    i3-2120 is 18, 17, 18
    25, 21, 20, 19
    Local HD 26,26 24,18,19,18 19,19 - 13,13,13 23,21 12, 12 12, 12, 12
    Linux IEEE (single HD) 34,32 22,23,23 - - 43,26,25 25,25,25 16, 18, 17, 16 18, 18, 18
    Linux P: (SAS 15k rpm RAID 6) 36,34 25,24 25,25 - - -
    USB Key 55,56 47,43 - - 40,44,40 41,41,43 34, 36, 37 35, 34, 34

    Summer 2011 Quartus 9.0 vs 10.1 Performance Tests

    Hardware RAM Q 9.0 x32 Win XP Q10.1 x32 Win XP
    Eric's Nexus 3.1GHz x2 3G 37, 28, 26 35, 23, 25
    Q9.0 x64 3.1GHz x22G 23, 11, 10, 10
    2.6GHz 1-core2.5G 33, 23, 21, 21
    2G 34, 23, 25 37, 23, 22
    1G 34, 26
    512M 166 !!
    2.5GHz 2-core1G 33, 23, 21, 21 46, 22, 22
    2G 32, 21, 22 38, 19, 19
    3.1GHz 2-core2G 28, 17, 18 34, 14, 15
    4G 28, 17, 17 34, 14, 15
    2.3GHz 2-core, M2N-PV, Win 74G 32, 22, 22 19, 19
    2.2GHz 1-core, M2N-PV, Win 74G 38, 27, 31, 27 28, 26, 26, 26

    Computer Problems

    RDP EngTerm Issue

    If having issues with Remote Desktop (RDP) to EngTerm have the user delete the file default.rdp from the Documents folder - the file is likely hidden so turn on show hidden files under the view folder options.

    MobaXterm

    Users install make but it doesn't follow them. MobaXterm unpacks itself into the user profile (AppData/Local/Temp/Mxt111 or Roaming???). If make is installed it's 100MB - too big for the profile.

    One option is to pre-install the tools and unzip it into C:\software\mobaxterm-root - but will it work as read-only? - July 2019

    Prolific Driver, PL2303, i-7561, Windows 10

    The correct driver for our RS485 devices is NOT the Prolific driver but the driver in the file SCADA-driver-i-756x_1223_ driverinstaller.exe In Windows the device should show up as i-756x driver. If the Prolific driver is detected (it has the same PID and VID 2303, 067B) then right click and delete the driver. Note: The Prolific driver seems to be downloaded by Windows automagically. Apr 2019

    ETap 16.0, 16.1 and 18.0

    Users can start ETap, but when running a simulation they get a database-related error such as:
    "Failed to connect to database" (ETap 18)
    "exception retrieve database version information ... SQL cannot create automatic instance 0x89C50118" (ETap 16.0 or 16.1)

    The fix:
    ETAP 14 and higher versions use SQL 2012 Express LocalDB for the project database. It seems the SQL instance required to connect to the ETAP projects is failing.
    Double-click to run the file 'CleanUpLocalDB.cmd' located at C:\ETAP 1800\Other\CleanUpLocalDB.cmd

    ComSol 4.4 and Win 7

    Simulations result in black output. The problem is mentioned here https://www.comsol.com/support/knowledgebase/933/ and the fix is:

    The quickest solution is to switch to software rendering:
    
    Start COMSOL Multiphysics.
    
    To open the Preferences dialog box, in the COMSOL Desktop:
    
    Windows users: From the File menu, select Preferences.
    
    Cross-platform (Mac and Linux) users and COMSOL Version 4.0 to 4.3b: From the main menu select Options>Preferences.
    
    In the Preferences window select Graphics and Plot Windows (Version 4.4 and later) or Graphics (Versions 4.0 to 4.3b) and set the Rendering option to Software.
    
    Click OK and close the COMSOL Desktop.
    

    Ryzen & CentOS 7

    7 June 2018 - Ryzen 3 2200G works with Ubuntu 16.04 LTS or newer and I can see the BIOS - but only with a direct connection to the monitor, not through the KVM switch I always use.

    As of March 2018 - Ryzen 3 2200G APU will not turn on the video in an MSI B350M PRO-VDH motherboard. Switching between Ryzen 7 and 2200G requires resetting the BIOS. It will boot and run Linux fine with the 2200G.

    CentOS 7.4 is unstable on the Ryzen 7 & MSI B350M PRO-VDH motherboard crashing at least daily with a CPU core "stuck".

    Ubuntu 16.04 with 4.13 kernel is stable with the Ryzen 7. About 6 months previous CentOS 7.3 was stable with Ryzen.

    KVM - unable to migrate VMs

    In the VM's disk (Virt IO) settings, set Advanced -> Performance -> Cache to NONE to allow migration.

    CentOS 7 Apache upgrade creates timezone errors on PHP

    Just add date_default_timezone_set ( "America/Toronto" ); before time functions

    mysql on eceLinux4

    Use this to find the settings: mysqld --verbose --help
    max_connections = 151 , open_files_limit = 5000
    We're hitting the open_files limit regularly.

    Note that checking the settings shows them with "-" instead of "_", e.g. max-connections 151
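
    A sketch of raising those limits in /etc/my.cnf (the values are examples, not what eceLinux4 currently runs):

    [mysqld]
    max_connections  = 300
    open_files_limit = 10000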

    sftp failure with CentOS

    Feb 2014 - sftp to the Linux machines fails if the account is configured for ECE327. The bash or csh printing the message about the account configuration for ECE 327 makes sftp fail with "Received message too long 1229866575". The solution is to wrap the 327 setup script:

    if ( $?TERM == 1 ) then
    #   set noglob
        source /home/ece327/setup-ece327.csh
    #   unset noglob
    endif
    
    

    Asus P8H77-M Motherboard Failures

    As of 2006 I'm seeing two failure modes for the Asus P8H77-M motherboard:

    In all cases I don't see failing capacitors. Out of about 120 installed, 3 have failed totally, 2 have lost the NIC and 4 are hanging - within 4 years.

    Asus M4A785T-M Motherboard Capacitor Failure

    The two 820uF caps by the SATA connectors puff and fail starting 2.5 years after being put into service. It only seems to happen to the capacitors stamped with a "+" on the top (Feb 2013). Failure of the caps results in a failure of the motherboard to boot. Replace them and all motherboards work again.

    Asus M2NPV-VM Motherboard and PS/2 and USB

    Enabling CPU Virtualization seems to solve the interrupt issue (mouse or keyboard not working if they're USB) which can plague the last BIOS

    ECS P6BXT+ (? 333MHz AMD K6 CPU) Motherboard Capacitor Failure

    Caps by the voltage regulator fail - half of the time destroying the motherboard.

    Computer Evaluation

    Jan 2008 - ComSol P3 vs P4

    P3 systems are 600MHz to 1GHz with 256M to 384M of RAM. P4 systems are 1.5 to 2.4GHz with 512M ram. A Comsol 3D simulation was run on all machines.
    P3's -- 19 to 25 seconds
    P4's -- 8 to 10 seconds

    Note that Intel has had the rug pulled from under them by AMD and is
    moving fast to catch up.  They've "killed" their Pentium line of
    processors and are now calling them "Core".  These are due to come
    out real soon now.   I believe that they've, finally, wrapped the
    Intel Mobile power-saving features into their desktop CPUs to
    reduce power consumption.
    
    PROBLEMS
    --------
    1) Only one serial port and it doesn't come out of the case by default.
            We need 2 serial ports for the Coldfire computers.
            We can do this via a dual serial port card ($52 per card).
            Another possible solution may be USB <--> Serial adapters but
              this is very unlikely as the serial communication programs do
              not yet support USB.
    
    2) Fedora Core 5 installed and worked well - AFTER I dropped in a supported
            network card and did a full OS update.
    
    TESTS
    -----
    AMD system works with Norton Ghost (finding NDIS drivers was awkward and
    the boot CD had to be manually massaged).
    
    AMD system works with auto and locked network speed/duplex.
    
    AMD system seems to automatically use power saving features with Fedora
    Core 5, and it's easy to add with Windows (enable Minimal Power Saving
    Mode after installing Power Now).
    
    
    Quartus Performance Tests - Winter 2014
    ----------------------------------------
    
    While considering FPGA boards to replace the DE2 it was discovered that synthesis times were dramatically larger for newer boards.  The circuit being compiled was close to trivial - just a few gates between input and output.
    
    DE2 Cyclone II 35,000 LEs
    DE2-115  Cyclone IV  115,000 LEs
    DE1-Soc  Cyclone V
    
    Quartus II 13.1 Synth. Times
    - - - - - - - - - - - - - - 
    Cyclone III EP3c5f256C6  42, 38 sec
    Cyclone IV EP4ce115f23C7   1:05, 1:04, 1:02
    Cyclone V 5csema5f31C6  2:18, 2:23, 2:22
    
    
    ECE 224 / 324 / 325 Test
    Eclipse -> New -> NIOS II SBP Template
    Start -> Nios -> Build Tools for Eclipse, use project directory /software
    Generate: 6:09, 6:15 with USB drive; on the N: disk 3:38, 2:04
    
    
    PERFORMANCE
    -----------
    This compares performance using Altera Quartus II 6.0 for compiling a sample
    ECE 325 processor in VHDL.  This is the most CPU-intensive application in
    our PC labs.  Compile times are currently around 5 minutes using Quartus II
    3.0 on existing P3's (1.2 to 1.4 GHz).
    
    Power Draw
    ----------
    P4 3.0GHz HT System - 200W power draw, noisy fans
    P4 3.0GHz Celeron Dual System - 110W, noisy fans
    P4 2.4GHz Celeron - 85W, noisy fans
    
    P3 System - 55W power draw
    
    Historical power consumption:
    	P2 System - 47W 	
    	P1 System - 33W
    	486 System - 25W
    	386 System - 32W
    
    AMD System - 55W power draw most of time, peaking 100W when number crunching
    
    
    Performance
    -----------
    P3-1.2GHz - 2:45 compile time on C: disk (2:50 on N: disk!)
    
    P4-2.4GHz Celeron - 1:25 compile time
    P4-3.0GHz D Celeron - 1:20 compile time (dual cpu Celeron)
    P4-3.0GHz HT P4 - 1:10 compile time (hyper-threading, quasi dual CPU)
    
    AMD Athlon 64 2.2GHz (rated 3.5GHz) - 1:00 compile time
    AMD Athlon 64 Dual 2GHz (rated 3.8GHz) - 1:00 compile time (dual CPU)
    
    System Info:
    AMD - joined to Nexus, minimal software install
    P3 - typical Nexus machine, fully loaded with s/w and NAV
    P4 - not on Nexus, NAV added a 1 sec delay to compile times 
    
    System Cost:
    AMD 2GHZ Athlon 64 A8N-VM, $447
    AMD 2.2GHZ Dual Athlon 64 A8N-VM, $630
    
    P4P800VM P4 $503
    P4P800VM Celeron dual $337
    	CPU prices +$120 for P4, +$230 for Mobile P4
    AMD CPU		2GHz mobile $106,
    		2GHz Athlon 64 $197,
    		2.2GHz Dual Athlon 64 $381,
    
    July 2006 - NOTE new AM2 processor using DDR2 coming no CSM M/B yet.
    
    

    Investigation

    Card Swiper Configuration

    To set the baud speed (1200N81) with Centos 5.2 in rc.local add "stty -F /dev/ttyS0 1200"

    To set the automatic login (GUI widget doesn't work) edit /etc/gdm/custom.conf
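
    A sketch of the custom.conf entries (the account name is a placeholder):

    [daemon]
    AutomaticLoginEnable=true
    AutomaticLogin=cardswipe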

    Add the user to the group which holds the serial port to allow access.

    There is no password (and auto login) for the default account, so disable keyboard locking.

    HTTPS on Centos 5.x

    genkey server_name.x.com

    Apache rewrite to redirect http to https

    From http://davmp.kimanddave.com/2008/03/30/installing-mailman-to-use-https-on-centos-51/

    # You'll need to insert two RewriteRule lines in your httpd config files to redirect all non-https requests for Mailman features to the https site. And if you don't have any rewrite features setup elsewhere, you'll need a couple of other lines. You can find out the most about this process by reading the Apache docs for the RewriteEngine here. But, since I've already got a virtual host file that represents the config I want to have Mailman show up as a part of, I simply added lines like the following:
    
    
       ...
       RewriteEngine        on
       RewriteCond          %{HTTPS} !=on
       RewriteRule          ^/mailman/(.*) https://davmp.kimanddave.com/mailman/$1 [L,R]
       RewriteRule          ^/pipermail/(.*) https://davmp.kimanddave.com/pipermail/$1 [L,R]
    
    
    
       ...
       Include "conf.d/mailman.conf.include"
    
    
    And then renamed /etc/httpd/conf.d/mailman.conf to /etc/httpd/conf.d/mailman.conf.include. These settings prevent Apache from allowing these URLs to work for any other virtual hosts. 

    Investigation

    Sunfire V65x, Intel SE7501WW2 motherboard, 4G, Dual dual core 3.06GHz Xeon (32 bit), Nov 2008

    The onboard raid is Adaptec 7902 software raid. It may support hot swapping of drives available at boot time - but it does not support adding extra drives when booted.

    Power draw for these servers is around 220W at boot time and 140W when running Centos 5. A modern quad core Xeon blade server sucks 140W with dual 15k RPM HDs. Both are much higher than comparable AMD systems (typically 45W for a dual core in light use).

    I tested the system by setting up RAID 1, installing the OS and pulling one HD. Immediately the OS gave errors about the pulled drive. Booting into the RAID controller software revealed that the RAID array was "optimal"! Rebooting with the HD re-inserted and all seemed well. I was not able to find how the software RAID was being done; /proc/mdstat revealed nothing. When I pulled the one hard drive again the OS pretty well hung. This is symptomatic of software RAID with RedHat.

    The RAID is managed with the dmraid commands: "dmraid -l" lists the supported formats, "dmraid -r" lists the current setup and driver, and "dmraid -s -s asr_raid1array" lists the particulars of the array.
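
    For reference, run as root (the array name is whatever "dmraid -s" reports on the machine):

       dmraid -l                     # list the RAID formats dmraid supports
       dmraid -r                     # show the RAID devices/sets found on this system
       dmraid -s -s asr_raid1array   # detailed status of the named array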

    The Ultra 320 SCSI HDs are curious. I've seen one where the BIOS, at boot time, flagged it as failing - but the SMART tools said that the health status was fine. The drive had bad sectors but SMART monitoring was not reporting that - only the temperature. I inserted a failing HD and it was not detected at boot time - but the SMART Health Status was failing.
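
    The SMART checks here would have been with the smartmontools package (smartctl); a quick sketch, with the device name as an example only:

       smartctl -H /dev/sda   # overall health self-assessment (the "Health Status" above)
       smartctl -a /dev/sda   # full SMART data, including defect/reallocated sector counts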

    My conclusion is that the RAID on these blade servers is less than useless.

    Hard Drive Performance Tests - November 2014

    Increasing SSD, 2x striped SSD, 3x striped SSD

    Intel i3 testbed Asus P8H77-M motherboard
    4 x 8G RAM, Adaptec 6805e RAID Controller
    Size | Sequential Output: Per Char K/sec %CP, Per Block K/sec %CP, Rewrite K/sec %CP | Sequential Input: Per Chr K/sec %CP, Block K/sec %CP | Random Seeks: /sec %CP | Files
    i3 testbed, tmp, 8G RAM ST 80G HD - Nov201415488M113549981920801313783214137832141132829672206035

    Increasing SSD, 2x striped SSD, 3x striped SSD

    Intel i3 testbed Asus P8H77-M motherboard
    4 x 8G RAM, Adaptec 6805e RAID Controller
    Size | Sequential Output: Per Char K/sec %CP, Per Block K/sec %CP, Rewrite K/sec %CP | Sequential Input: Per Chr K/sec %CP, Block K/sec %CP | Random Seeks: /sec %CP | Files
    Mirrored
    WD Green mirror63960M9186182937026467694103267921969948437.3116
    i3-ssd-5400x1-mirror63960M1110099511433787180461103059359923225
    i3-ssd-5400x1-mirror63960M1061619210777176843061109729460273425
    SSD mirroring 500G 7,200 rpm63960M1155399913034697877971105579460007325
    1TB SSD mirroring 10k rpm Velociraptor63960M11535399148500108622571111059459211325
    i3-ssd-SAS-mirror63960M115926992069261410078991131429658693824
    1TB SSD mirror63960M1155859940820326203795191168089983045135
    Striped Across Two Drives
    i3 testbed RAID 10 2x1TB SSD, 2x Green WD, 8G RAM - Nov201415488M113549981920801313783214137832141132829672206035
    i3 testbed RAID 10 2x1TB SSD, 2x Green WD - i3-2SSD-2greenHD, 32G RAM63960M1158409919100613111572101164889974861033
    i3-striped7200s63960M11534699267073189724181119019527732212396.4116
    i3-stripedVelociraptors63960M11552399257582179308481127799626694611670.2116
    i3-2SSD-2greenHD63960M1158409919100613111572101164889974861033
    i3-ssd-7200-x2-raid1063960M1163519926661918116523111152629844419519878.0216
    i3-ssd-velo-x4-raid1063960M11531199260695171237641111436897464339201486.9316
    i3-ssd-SAS-x4-raid1063960M11321899400418261807541710928397569758252124.8416
    i3-striped15kSAS63960M1164919941857626161056141144959743317919986.1216
    i3-striped15kSAS63960M1163379938768125148908131148099740412417961.7216
    2 1TB SSDs striped63960M1133509980088550292968281161289883125635
    Striped Across Three Drives
    i3-WDgreen-strip3, no SSDs63960M115333992854901910874891133249629421612452.8116
    i3-SSD-WDGreenRAID10x6 (redone)63960M1156399928515419167755161144459774185033
    i3-SSD-WDGreenRAID10x6 (redone again)63960M1156429927986918165685161144909774386833
    i3-ssd-7200-x3-raid1063960M11542799381174241644071511461197550458241333.0316
    3 1TB SSD's striped63960M1131289990573858333320331110769984057236
    FreeNAS vs CentOSSizeSequential OutputSequential InputRandom SeeksSequential Create Random Create
    Per CharPer BlockRewritePer ChrBlockRandom SeeksFiles CreateReadDeleteCreateReadDelete
    K/sec%CPK/sec%CPK/Sec%CPK/sec%CPK/Sec%CP/sec%CP per sec%CP per sec%CPper sec%CPper sec%CPper sec%CPper sec%CP
    FreeNAS 4 x 1TB SSD to Client via 10GBe Network, March 2016
    SuperMicro FreeNAS 10Gbe with 4x 1TB SSD to SuperMicro 10Gbe CentOS 7128168M18271099106187329311626151647019733180083547.2716602711++++++++618112612712++++++++111029
    SuperMicro FreeNAS 1Gbe with 4x 1TB SSD to SuperMicro 1Gbe CentOS 7128168M114647100160715601440681436801745.331610950280880163801362044193623290
    SuperMicro FreeNAS 10Gbe i5 server with 3 x 1TB RAIDz, CentOS 6 i3 10Gbe client, FreeNAS10gb63848M1859249911112803317926281677699123013142867.3116457111++++++++376111517014++++++++836115
    SuperMicro FreeNAS 100Mb with 4x 1TB SSD to SuperMicro 100Mb CentOS 7128168M11468141146716901112404241317511070.06164931117779967794991210423210050
    AMD 8-core FX to 1Gb/s eceSERV RAID 10 3 of SSD and 3 of HD63840M639759696058745568106580099100507101960.0716111332531611426110642792513825
    eceLinux1-InfiniHost-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA128304M1578577910552161362721218375499608778173117.1111633485++++++++562053574620763968456
    eceLinux2-to-i3-raid10-ssd-10krpm-eceServ-NFSoRDMA128304M148413758677751367151418552599612089392772.81316957710++++++++1548111824910++++++++1254813

    InfiniBand Data Corruption - July 2016

    Setup: HP ConnectX-2 cards connected to a Voltaire 4036 switch at QDR speeds with Mellanox cables. CentOS 6 on the NFS file server, using Datagram or Connected mode to share files to CentOS 6 or CentOS 7 clients. All CentOS machines use the stock InfiniBand support. NFS sharing uses IPoIB or RDMA.

    Symptom: Large text files (18M and 160M) with 2 columns of numbers get corrupted in a repeatable pattern. Using RDMA results in much higher corruption (7772 vs 960 lines corrupted). No errors are reported by the OS or switch. Running ibqueryerrors and diffing its output before and after reading a corrupted file shows no change in the error counters.
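
    A simple way to quantify the corruption from the client, assuming a known-good copy of the file exists locally (all paths are placeholders):

       ibqueryerrors > before.txt                       # IB port error counters before the read
       cp /mnt/nfs-share/bigfile.txt /tmp/over-nfs.txt  # read the file across the NFS mount
       diff /tmp/known-good.txt /tmp/over-nfs.txt | grep -c '^>'   # count of corrupted lines
       ibqueryerrors > after.txt
       diff before.txt after.txt                        # counters unchanged despite the corruption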

    Ethernet vs InfiniBand (SDR, QDR)

    These tests were done using cheap i3 and AMD based systems running CentOS 7. InfiniHost III DDR cards were used with an SDR switch (10Gb/s clock, 8Gb/s actual). Directly connected ConnectX-2 cards were used for the QDR tests. The client and server booted from 7200 rpm 500G HDs. For performance tests a Samsung EVO 850 500G SSD was used; its performance maxes out at about 10Gb/s. However, the latency of the SSD is much lower than that of 10Gb ethernet.

    A very easy test to run is fio on the links. The command options used were:
    fio --rw=randread --bs=64k --numjobs=4 --iodepth=8 --runtime=30 --time_based --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name task1 --filename=/testing/junk/1.txt --size=10000000

    Network                        Data in 30 sec   Aggregate Bandwidth   Bandwidth       latency (ms)   iops
                                                    (MB/s, Gb/s)          (MB/s, Gb/s)
    QDR IB 40Gb/s, NFS over RDMA   94 GB            3,100, 25             802, 6.4        0.615          12,535
    DDR IB 20Gb/s, NFS over RDMA   24.4 GB          834, 6.7              208, 1.7        2.4            3,256
    SDR IB 10Gb/s, NFS over RDMA   22.3 GB          762, 6.1              190, 1.5        2.57           2,978
    QDR IB 40Gb/s                  16.7 GB          568, 4.5              142, 1.1        3.4            2,218
    DDR IB 20Gb/s                  13.9 GB          473, 3.8              118, 0.94       4.1            1,845
    SDR IB 10Gb/s                  13.8 GB          470, 3.8              117, 0.94       4.2            1,840
    10Gb/s ethernet                5.9 GB           202, 1.6              51, 0.41        9.7            793
    1Gb/s ethernet                 3.2 GB           112, 0.90             28              17.8           438
    100Mb/s ethernet               346 MB           11.5                  2.9             174            45
    10Mb/s ethernet via switch     36 MB            1.2                   279 kB/s        1797           4
    10Mb/s ethernet via hub        33 MB            1.0                   260 kB/s        1920           4

    NOTE: It is clear from the NFS over RDMA data above that the ConnectX-2 card (QDR, 40Gb/s) has significantly better performance than the older InfiniHost III cards run at SDR or DDR speeds.

    Ethernet vs InfiniBandSizeSequential OutputSequential InputRandom SeeksSequential Create Random Create
    Per CharPer BlockRewritePer ChrBlockRandom SeeksFiles CreateReadDeleteCreateReadDelete
    K/sec%CPK/sec%CPK/Sec%CPK/sec%CPK/Sec%CP/sec%CP per sec%CP per sec%CPper sec%CPper sec%CPper sec%CPper sec%CP
    NFSoIB InfiniBand Tests using CentOS 7
    TEST TO LOCAL 7200rpm HD on client?? NFS1gB-to-ssd128128M9325352789563399272104771611263553146.601611623200428115111422241231151
    40Gb/s InfiniBand QDR-IB-NFS to a 7200rpm HD128128M805895473291740351797960611137448140.20161152++++++++11511132++++++++1141
    1Gb/s ethernet to SSD on server128128M1115875911427328836131234857114715327349.881618558++++++++434051975723271052433
    IB-SDR-to-ssd128128M17250498372110152053301415824099702011238275.73316458014++++++++840117446116++++++++1931814
    40Gb/s InfiniBand QDR-IB-NFS to an SSD128128M185401985075883124178617168968997020652212098.41816663115++++++++952120646618++++++++2184918
    IB-QDR-NFSoRDMA-to-ssd128128M18372198505062332512272417513699704626459183.045161099715++++++++17611181089718++++++++2893717
    SSD tested on the server23G1168279855132535240756221104949966684728++++++++16++++++++++++++++++++++++++++++++++++++++++++++++
    striped-2-SSDs-speed-on-server23G1198039968062739301202251115999887343423++++++++16++++++++++++++++++++++++++++++++++++++++++++++++
    IB-QDR-NFSoRDMA to 2 striped ssds128128M187368995923613632602121176537991081013569373.238161495416++++++++20493171427017++++++++2869316
    IB-DDR-NFSoRDMA-to-RAIDz-3-ssd128128M18413099681531383346953517307299993987295214.1101617566++++++++146211179761954410154911
    IB-QDR-NFSoRDMA-to-RAIDz-3-ssd128128M184594996884363733984534175052991018267464801.22516135119++++++++123717135415++++++++143615
    ZFS-RAIDz-3-ssds-on-server23G12682499766040793521835311128499815423415484.11816++++++++++++++++++++++++2982199++++++++++++++++
    btrfs-3-ssds-on-server23G1232339941698214170231141090299842748016++++++++16++++++++++++++++++++++++++++++++++++++++++++++++
    IB-QDR-NFSoRDMA-to-BTRfs-3-ssd128128M18238696368255262058132417485099532123346632.532161442516++++++++21168141481117++++++++2012712

    Here is a simplified version of the above table.

    Ethernet vs InfiniBand
    Terse Summary
    Sequential OutputSequential InputRandom SeeksSequential Create Random Create
    Per CharPer BlockRewritePer ChrBlock CreateReadDeleteCreateReadDelete
    M/secM/secM/SecM/secM/Secthousands /sec per sec per secper secper secper secper sec
    Bonnie++ tests of a client to a server
    TEST TO LOCAL 7200rpm HD on client?? NFS1gB-to-ssd9379401051260.15 116320041151142241115
    40Gb/s InfiniBand QDR-IB-NFS to a 7200rpm HD817340981140.14 115+++++115113+++++114
    1Gb/s ethernet to SSD on server112114881231477.3 1855+++++4340197523275243
    10Gb/s InfiniBand SDR to an SSD1733722051587028.3 4580+++++84014461+++++19318
    40Gb/s InfiniBand QDR-IB-NFS to an SSD18550824216970212 6631+++++95216466+++++21849
    File System Test on the Server
    SSD tested on the server117551241110667+++++++++++++++++++++++++++++++++
    Adaptec 6805e RAID 2 striped SSDs on the server120681301112873+++++++++++++++++++++++++++++++++++
    ZFS-RAIDz-3-ssds-on-server1277663521118155.5+++++++++++++++29821++++++++++
    btrfs-3-ssds-on-server123417170109427+++++++++++++++++++++++++++++++++++
    File System Test from an NFS Client
    IB-QDR-NFSoRDMA-to-BTRfs-3-ssd1823682061755326.614425+++++2116814811+++++20127
    40Gb/s InfiniBand QDR NFS over RDM to SSD1845052511757059.210997+++++1761110897+++++28937
    40Gb/s InfiniBand QDR NFS over RDMA to 2 striped ssds18759232617710819.414954+++++2049314270+++++28693
    IB-DDR-NFSoRDMA-to-RAIDz-3-ssd1846823351739945.21756+++++4621797195441549
    IB-QDR-NFSoRDMA-to-RAIDz-3-ssd18568834017510184.81351+++++12371354+++++1436
    Different Hardware for Comparison - File System Test from an NFS Client
    FreeNAS 3x1TB SSD RAIDz 10Gb/s ethernet to client18611111791682302.94571+++++37615170+++++8361
    AMD 8-core FX to 1Gb/s AMD based eceSERV RAID 10 3 of SSD and 3 of HD64.096.145.6661011.96111325311142110627921382
    eceLinux1-InfiniHost-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA1581061361846093.13348+++++56203574207636845
    eceLinux2-ConnectX2-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA148871371866122.89577+++++154818249+++++12548

    Performance Test of ZFSonLinux CentOS 7

    Before deploying a new file server I performed some performance tests after finding Quartus compiles to be slower than expected.

    Two simultaneous compiles took 20 seconds on the old file server (hybrid RAID 10 with 3 x 1TB SSDs and 3 x 10k rpm HDs on hardware RAID) and 33 seconds on the ZFSonLinux server (4 x 1TB SSDs in RAIDz). Tests of 1, 4 or 8 compiles at a time were similarly affected.

    eceServ has a messed-up RAID 10 array: one mirror pair is SSD+SSD, one is SSD+HD and one is HD+HD, so performance is suboptimal compared to the original configuration of 3 SSDs each mirrored with an HD.

    An fio test indicates that the network connection to the old and new file servers is exactly the same (they have the same switches and routers in the path, and the same motherboard, DDR4 RAM and model of SSDs ...).

    The first 3 tests involve mounting the file system on a client machine with 3 different methods, and the last test is a bonnie++ test of the file system on the server itself.
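
    The three client mounts were roughly of the following form (server names, export paths and mount points are placeholders; 20049 is the usual NFS-over-RDMA port):

       # NFS over 1Gb/s ethernet (or over IPoIB, by using the server's IB-interface address)
       mount -t nfs servnew:/tank/home /mnt/home

       # NFS over RDMA on the QDR InfiniBand link
       mount -t nfs -o rdma,port=20049 servnew-ib:/tank/home /mnt/home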

    The file system is 4 x 1TB Samsung 850 EVO SSDs in RAIDz, with a pair of HDs on a RAID controller for booting.

    All tests were done with kernel 3.10.0-327.28.2.el7.x86_64 except as noted below.
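
    The RAIDz pool would be built along these lines - a minimal sketch, assuming the four SSDs appear as /dev/sdb through /dev/sde and the pool is named "tank" (the ashift=12 and lz4 settings match the Dec 2016 notes below):

       zpool create -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
       zfs set compression=lz4 tank
       zfs create tank/home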

    Ethernet vs InfiniBand
    Terse Summary
    Sequential OutputSequential InputRandom SeeksSequential Create Random Create Quartus Compile
    Per CharPer BlockRewritePer ChrBlock CreateReadDeleteCreateReadDelete
    M/secM/secM/SecM/secM/Secthousands /sec per sec per secper secper secper secper sec
    ecelinux3- use FreeNAS 4TB HD RAIDz array29 sec
    ecelinux3- use local-ssd140307193126662++++++++++++++++++++++++++++++++21 sec
    1Gb/s ethernet, eceLinux3-to-servNew 4 ssd RAIDz lz4108106381101403.0151+++++1551515286162 33 sec
    1Gb/s eth, ecelinux3-to-servNew-ssd-single as ZFS vol no lz410199371091442.9178+++++1841765343190 32 sec with lz4
    1Gb/s eth, ecelinux3-to-servNew-ssd as ext4-1Gbe9810535971022.51046+++++3203130959364792 32 sec
    1Gb/s ecelinux3-to-serv hybrid raid 10108109501131409.41456326762577142443632993 23 sec
    1Gb/s ethernet ecelinux3-to-servNew-tmp-directory-1Gbe - regular HD ext48986321151362.8743+++++10756085299994
    QDR InfiniBand, eceLinux3-to-servNew-IB 4 ssd RAIDz lz4141159784712821863.8158+++++16215720411167
    QDR RDMA IB, eceLinux3-to-servNew-RDMA 4 ssd RAIDz1421968104212934564.0167+++++16916524618172 37 sec
    On the server eceServ-RAID10x6-hybridSSD-HD (128G blocks)1883571781674132.3++++++++++++++++++++++++++++++
    On the server (128G blocks), eceserv-new.uwaterloo.ca1756665381682596+++++24567+++++2553721229+++++29492
    ecelinux3-to-servNew-ssd-zfs-rdma kernel-3.10.0-327.28.3.el7.x86_64 1422012111413032256.6351+++++33831925725350 27 sec
    ecelinux3-to-servNew-home-zfs-rdma kernel-3.10.0-327.28.3.el7.x86_64 1431877116712832636.4318+++++31129825953318 28 sec
    
    
    Tests - Dec 2016 - ZFSonLinux server setup using ashift=12 4 x 1TB Samsung SSDs
    eceLinux1 to eceServ1 using QDR RDMA
      read : io=12015MB, bw=410086KB/s, iops=6407, runt= 30001msec
        slat (usec): min=4, max=213, avg=10.07, stdev= 6.41
        clat (usec): min=411, max=2666, avg=1210.97, stdev=193.85
         lat (usec): min=489, max=2673, avg=1221.10, stdev=193.78
       READ: io=48062MB, aggrb=1602.2MB/s, minb=409912KB/s, maxb=410318KB/s, mint=30000msec, maxt=30001msec
    
    Test #2
      read : io=12083MB, bw=412404KB/s, iops=6443, runt= 30001msec
        slat (usec): min=4, max=944, avg= 9.95, stdev= 6.71
        clat (usec): min=377, max=3887, avg=1204.72, stdev=200.28
         lat (usec): min=391, max=3897, avg=1214.72, stdev=200.27
       READ: io=48335MB, aggrb=1611.1MB/s, minb=412280KB/s, maxb=412607KB/s, mint=30001msec, maxt=30001msec
    
    
    
    
    older fio tests - 1Gb/s bandwidth to both servers is the same speed (same switch & router - so duhh)
    
    fio test to eceServ-new ssd setup as ZFS with lz4
    read : io=841024KB, bw=28026KB/s, iops=437, runt= 30009msec
        slat (usec): min=9, max=167, avg=64.01, stdev=19.53
        clat (msec): min=1, max=38, avg=18.00, stdev= 2.80
         lat (msec): min=1, max=38, avg=18.06, stdev= 2.81
    
    fio test to eceServ-new ssd setup as ZFS with lz4 using RDMA
    read : io=18999MB, bw=648478KB/s, iops=10132, runt= 30001msec
        slat (usec): min=7, max=249, avg=11.05, stdev= 2.03
        clat (usec): min=149, max=10420, avg=742.97, stdev=192.64
         lat (usec): min=163, max=10430, avg=754.14, stdev=192.78
    
    fio test to eceServ hybrid RAID 10 array
    read : io=844160KB, bw=28131KB/s, iops=439, runt= 30008msec
        slat (usec): min=10, max=356, avg=68.07, stdev=19.51
        clat (msec): min=4, max=30, avg=17.79, stdev= 2.02
         lat (msec): min=4, max=30, avg=17.86, stdev= 2.02
    
    fio test to FreeNAS backup server using 4TB HDs in RAIDz
      read : io=750464KB, bw=25000KB/s, iops=390, runt= 30019msec
        slat (usec): min=11, max=238, avg=65.48, stdev=19.84
        clat (msec): min=5, max=37, avg=19.97, stdev= 2.58
         lat (msec): min=5, max=37, avg=20.04, stdev= 2.58
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 143043  99 1876643  53 1167472  56 127623  99 3263335  62  6431   8
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   318   3 +++++ +++   311   2   298   3 25953  39   318   1
    ecelinux3-to-servNew-home-zfs-rdma new kernel,63776M,143043,99,1876643,53,1167472,56,127623,99,3263335,62,6431.5,8,16,318,3,+++++,+++,311,2,298,3,25953,39,318,1
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 142080  99 2011716  57 1114007  56 129575  99 3224767  61  6573   9
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   351   4 +++++ +++   338   2   319   3 25725  41   350   2
    ecelinux3-to-servNew-ssd-zfs-rdma new kernel,63776M,142080,99,2011716,57,1114007,56,129575,99,3224767,61,6573.3,9,16,351,4,+++++,+++,338,2,319,3,25725,41,350,2
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 140262  99 307274  15 192950  17 125994  99 662069  29 +++++ +++
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    ecelinux3-to-local-ssd,63776M,140262,99,307274,15,192950,17,125994,99,662069,29,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 100736  74 99212   4 37053   4 108801  87 143548   6  2880   7
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   178   1 +++++ +++   184   1   176   1  5343  16   190   0
    ecelinux3-to-servNew-ssd-single-as-ZFSvol-1Gbe,63776M,100736,74,99212,4,37053,4,108801,87,143548,6,2879.7,7,16,178,1,+++++,+++,184,1,176,1,5343,16,190,0
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 98328  74 105344   5 34592   3 96947  80 101594   4  2519   4
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1046   9 +++++ +++  3203  14  1309  12  5936  17  4792  17
    SSD as ext4 partition ecelinux3-to-servNew-ssd-ext4-1Gbe,63776M,98328,74,105344,5,34592,3,96947,80,101594,4,2519.2,4,16,1046,9,+++++,+++,3203,14,1309,12,5936,17,4792,17
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 89070  67 86473   4 31980   3 114609  90 135646   6  2821   4
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   743   6 +++++ +++  1075   6   608   6  5299  16   994   5
    ecelinux3-to-servNew-tmp-directory-1Gbe,63776M,89070,67,86473,4,31980,3,114609,90,135646,6,2821.0,4,16,743,6,+++++,+++,1075,6,608,6,5299,16,994,5
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceServ-RAI 128448M 188410  99 356675  13 177819  12 167380  89 412860  18  2257   4
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    eceServ-RAID10x6-hybridSSD-HD,128448M,188410,99,356675,13,177819,12,167380,89,412860,18,2256.7,4,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux3-to 63776M 107629  79 106149   5 38316   4 109592  87 140060   6  2973   6
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   151   1 +++++ +++   155   1   151   1  5286  16   162   0
    eceLinux3-to-servNew,63776M,107629,79,106149,5,38316,4,109592,87,140060,6,2973.0,6,16,151,1,+++++,+++,155,1,151,1,5286,16,162,0
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux3-to 63776M 142215  99 1967963  54 1042398  50 128675  99 3455573  66  4030   5
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   167   1 +++++ +++   169   1   165   1 24618  39   172   0
    eceLinux3-to-servNew-RDMA,63776M,142215,99,1967963,54,1042398,50,128675,99,3455573,66,4029.8,5,16,167,1,+++++,+++,169,1,165,1,24618,39,172,0
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux3-to 63776M 140834  99 1596642  57 846652  45 128086  99 2186268  45  3776   5
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   158   2 +++++ +++   162   1   157   2 20411  40   167   1
    eceLinux3-to-servNew-IB,63776M,140834,99,1596642,57,846652,45,128086,99,2186268,45,3776.0,5,16,158,2,+++++,+++,162,1,157,2,20411,40,167,1
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceserv-new 128168M 175209  98 666440  97 538202  94 167666  98 2596462  87 +++++ +++
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 24567  99 +++++ +++ 25537  99 21229 100 +++++ +++ 29492 100
    eceserv-new.uwaterloo.ca,128168M,175209,98,666440,97,538202,94,167666,98,2596462,87,+++++,+++,16,24567,99,+++++,+++,25537,99,21229,100,+++++,+++,29492,100
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecelinux3-to 63776M 107826  79 108519   5 49646   5 113269  89 139719   6  9374  13
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1456  11 32676  14  2577  12  1424  11  4363  14  2993  12
    ecelinux3-to-serv,63776M,107826,79,108519,5,49646,5,113269,89,139719,6,9374.2,13,16,1456,11,32676,14,2577,12,1424,11,4363,14,2993,12
    
    

    Table of all data

    ServerSizeSequential OutputSequential InputRandom SeeksSequential Create Random Create
    Per CharPer BlockRewritePer ChrBlockRandom SeeksFiles CreateReadDeleteCreateReadDelete
    K/sec%CPK/sec%CPK/Sec%CPK/sec%CPK/Sec%CP/sec%CP per sec%CP per sec%CPper sec%CPper sec%CPper sec%CPper sec%CP
    eceServBoot RAID0 600G SAS and 500G 7200rpm - Nov201463704M8206580737479388396970819219724822761.3516
    eceServ Home 6xSAS 15k rpm RAID 10 - Nov201463704M10342498561295521795382810006796463026541559.61016
    eceServ RAID10 3 x SSD 3xVelociraptor - Dec 201463704M10479799554304552173553610786298529560652390.31516
    eceServHome-1SSD - Nov 201463704M10223798551985521942372910267697518187622003.31316
    ecewo3 SSDx1 - Nov201462G5319185109845238285413635239654378658
    ecewo32SSD31976M58493816087320312232157524907727215189.3116
    ecewo3-WOdb-NFS-share - Nov201462G551909759859922796539248567577272865.91416101692262831884985393555156285
    eceLinux3-NFS-serv - Nov201463840M45897754622152972876144398712838846.0316371276533752360277623932
    eceLinux2-NFS-ServSSDs - Dec 201463840M792819594125746257108314299108043121942.7716111132548711534110762769613694
    eceLinux3-NFS-serv63840M783499493492745249108307699110195121279.5516112242531611434109862795613663
    eceLinux3-NFS-Serv-1SSD63840M617079497820944910106283099104103101606.17169113191149331910420742010191
    eceLinux5-NFS-ServSSDs - Dec 201463G93304799539312452861510400895108949221949.4416114632577411824114242782314282
    eceLinux5-NFS-serv63G8795686889061442920139812494103878171288.9616110642569311453108842791413613
    eceLinux5-NFS-serv1SSD63G8501888886581543126139913399105891181625.261610995257251113596252772312653
    eceLinux4-NFS-ServSSDs - Dec 201415G7625086105731852504148401199145489156006.42616851814091241004785882361910047
    eceLinux4-NFS-serv15G798078996881750348148050599144033154901.92316866713994171003787972338910007
    eceLinux4-NFS-Serv-1SSD15G760198788916750145148380998131553164608.52216863714247171000887182316119968
    Tests of InfiniBand using ConnectX-2 cards directly connecting 2 computers.
    
    fio --rw=randread --bs=64k --numjobs=4 --iodepth=8 --runtime=30 --time_based --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name task1 --filename=/testing/junk/1.txt --size=10000000
    
    
    Testing local file system (7200 rpm HD)
       READ: io=8899.3MB, aggrb=303 680KB/s, minb=75 866KB/s, maxb=76014KB/s, mint=30003msec, maxt=30008msec
         or 8.9GB, 304MB/s, 75.9 MB/s, 6.7ms lat, 1187 iops
      read : io=2227.6MB, bw=76014KB/s, iops=1187, runt= 30007msec
        slat (usec): min=2, max=227, avg= 6.72, stdev= 7.00
        clat (usec): min=224, max=100657, avg=6680.40, stdev=2089.69
         lat (usec): min=229, max=100720, avg=6687.19, stdev=2090.73
    
    NFS over RDMA to a striped pair of SSDs
       READ: io=94053MB, aggrb=3134.2MB/s, minb=802537KB/s, maxb=802575KB/s, mint=30001msec, maxt=30001msec
    or 94G, 3.1G/s, 803MB/s, lat 614us, 12539 iops
      read : io=23513MB, bw=802538KB/s, iops=12539, runt= 30001msec
        slat (usec): min=3, max=1146, avg= 7.57, stdev= 3.28
        clat (usec): min=79, max=1876, avg=606.19, stdev=90.78
         lat (usec): min=92, max=2045, avg=613.80, stdev=90.77
    
    
    NFS mount with QDR ConnectX-2 cards: USING NFSoRDMA!!
       READ: io=94050MB, aggrb=3134.1MB/s, minb=802 245KB/s, maxb=803017KB/s, mint=30001msec, maxt=30001msec
      or 94GB, 3.1GB/s, 802MB/s, 615us, 12535 iops   
      read : io=23504MB, bw=802245KB/s, iops=12535, runt= 30001msec
        slat (usec): min=4, max=474, avg= 7.43, stdev= 1.85
        clat (usec): min=72, max=8837, avg=607.94, stdev=113.73
         lat (usec): min=81, max=8848, avg=615.40, stdev=113.59
    
    
    NFS mounted via RDMA using SDR or 10Gb/s with InfiniHost III cards
       READ: io=22319MB, aggrb=761758KB/s, minb=190363KB/s, maxb=190654KB/s, mint=30002msec, maxt=30003msec
    or 762MB/s, 190MB/s, 2.57ms lat, 2978 iops
      read : io=5586.2MB, bw=190654KB/s, iops=2978, runt= 30003msec
        slat (usec): min=4, max=835, avg= 7.00, stdev= 3.83
        clat (usec): min=328, max=8411, avg=2559.16, stdev=486.51
         lat (usec): min=336, max=8418, avg=2566.22, stdev=486.45
    
    NFS mounted via RDMA using DDR or 20Gb/s with InfiniHost III cards
       READ: io=24430MB, aggrb=833785KB/s, minb=208410KB/s, maxb=208492KB/s, mint=30001msec, maxt=30003msec
      or 24.4G, 834MB/s, 208MB/s, 2.4ms, 3256 iops
      read : io=6107.2MB, bw=208436KB/s, iops=3256, runt= 30003msec
        slat (usec): min=3, max=617, avg= 7.00, stdev= 3.50
        clat (usec): min=684, max=6448, avg=2355.94, stdev=296.73
         lat (usec): min=695, max=6454, avg=2362.98, stdev=296.70
    
    
    NFS mount over QDR ConnectX-2 cards:  NFSoIB
    Run status group 0 (all jobs):
       READ: io=15 707MB, aggrb=536 085KB/s, minb=133 964KB/s, maxb=134 139KB/s
        or 15.7GB,  536MB/s, 134MB/s
    
    Testing with DDR InfiniHost III cards connected by SDR switch
       READ: io=13 788MB, aggrb=470 564KB/s, minb=117 607KB/s, maxb=117 725KB/s, mint=30004msec, maxt=30005msec
         or 13.8GB, 470MB/s, 117MB/s, 4.2ms, 1840 iops   DDR at SDR speed
    read : io=3449.5MB, bw=117725KB/s, iops=1839, runt= 30004msec
        slat (usec): min=3, max=861, avg=11.37, stdev= 8.48
        clat (usec): min=244, max=9225, avg=4154, stdev=896
         lat (usec): min=261, max=9237, avg=4165, stdev=897
    
    Testing DDR InfiniHost III connected directly
    rate:            20 Gb/sec (4X DDR)
    READ: io=13854MB, aggrb=472819KB/s, minb=118104KB/s, maxb=118301KB/s, mint=30003msec, maxt=30005msec
      or 13.9GB, 473MB/s, 118MB/s, lat 4.1ms, 1845 iops   DDR
    read : io=3460.6MB, bw=118105KB/s, iops=1845, runt= 30004msec
        slat (usec): min=4, max=1294, avg=11.57, stdev=11.96
        clat (usec): min=1204, max=9557, avg=4131.76, stdev=750.82
         lat (usec): min=1361, max=9566, avg=4143.40, stdev=751.04
    
    fio done on the local file system to 7200 rpm HD
       READ: io=8626.6MB, aggrb=294382KB/s, minb=73340KB/s, maxb=74071KB/s, mint=30002msec, maxt=30007msec
      read : io=2156.2MB, bw=73579KB/s, iops=1149, runt= 30007msec
        slat (usec): min=3, max=5737, avg=11.90, stdev=32.52
        clat (usec): min=423, max=173382, avg=6898.98, stdev=3536.77
         lat (usec): min=435, max=173416, avg=6911.03, stdev=3536.91
    
    fio done over QDR IB
       READ: io=16 636MB, aggrb=567778KB/s, minb=141899KB/s, maxb=141999KB/s, mint=30002msec, maxt=30003msec
         or 16.7GB, 568 MB/s, 142MB/s, 3.4ms lat, 2218 iops QDR
      read : io=4160.6MB, bw=142000KB/s, iops=2218, runt= 30003msec
        slat (usec): min=4, max=1216, avg=11.92, stdev=11.06
        clat (usec): min=56, max=170945, avg=3433.91, stdev=2056.65
         lat (usec): min=266, max=171043, avg=3445.91, stdev=2058.37
    
    Testing with 1Gb ethernet between machines
       READ: io=3 289.7MB, aggrb=112 210KB/s, minb=28 048KB/s, maxb=28067KB/s, mint=30003msec, maxt=30015msec
         or 3.2GB, 112MB/s, 28MB/s, 17.8ms, 438 iops   1Gb eth
     read : io=842048KB, bw=28058KB/s, iops=438, runt= 30011msec
        slat (usec): min=4, max=73, avg=16.43, stdev= 4.73
        clat (usec): min=795, max=36337, avg=17793, stdev=3057
         lat (usec): min=805, max=36346, avg=17809, stdev=3057
    
    Testing between eceLinux1 and FreeNAS server at 10Gb/s
    READ: io=5944.7MB, aggrb=202 823KB/s, minb=50 662KB/s, maxb=50754KB/s, mint=30007msec, maxt=30010msec
      or 5.9G, 202MB/s, 51MB/s, 9.7ms, 793 iops 10Gb/s eth
     read : io=1487.5MB, bw=50754KB/s, iops=793, runt= 30010msec
        slat (usec): min=4, max=184, avg=15.30, stdev= 5.81
        clat (usec): min=383, max=56617, avg=9636.57, stdev=3464.32
         lat (usec): min=394, max=56628, avg=9652.00, stdev=3465.27
    
    Testing eceLinux1 (10Gb/s) to eceServ (1Gb/s, hybrid SSD & 10k rpm striped FS)
     READ: io=3348.6MB, aggrb=114226KB/s, minb=28473KB/s, maxb=28606KB/s, mint=30011msec, maxt=30018msec
      or 3.3G, 114MB/s, 28.5MB/s
    read : io=857536KB, bw=28574KB/s, iops=446, runt= 30011msec
        slat (usec): min=4, max=106, avg=19.05, stdev= 3.49
        clat (msec): min=4, max=29, avg=17.31, stdev= 2.47
         lat (msec): min=4, max=29, avg=17.33, stdev= 2.47
    
    NFS mount over 100Mb connection
    Run status group 0 (all jobs):
       READ: io=345 728KB, aggrb=11 457KB/s, minb=2 855KB/s, maxb=2 888KB/s
        or  346MB, 11MB/s, 2.86 MB/s, 2.89MB/s  100Mb eth
    
       READ: io=345792KB, aggrb=11458KB/s, minb=2858KB/s, maxb=2888KB/s, mint=30110msec, maxt=30177msec
        or 346M, 11.5MB/s, 2.89MB/s, lat 174 ms, 45 iops
      read : io=86976KB, bw=2888.7KB/s, iops=45, runt= 30110msec
        slat (usec): min=19, max=148, avg=95.14, stdev=20.98
        clat (msec): min=7, max=347, avg=173.71, stdev=37.16
         lat (msec): min=7, max=347, avg=173.81, stdev=37.17
    
    
    100Mb/s ethernet with a very old switch
       READ: io=345984KB, aggrb=11465KB/s, minb=2854KB/s, maxb=2886KB/s, mint=30099msec, maxt=30177msec
      or 346MB, 11.5MB/s, 2.9MB/s, lat 176ms, 44 iops
      read : io=86272KB, bw=2859.5KB/s, iops=44, runt= 30171msec
        slat (usec): min=4, max=127, avg=11.99, stdev= 8.19
        clat (msec): min=50, max=334, avg=176.25, stdev=21.05
         lat (msec): min=50, max=334, avg=176.26, stdev=21.05
       
    
    10Mb/s ethernet via switch downgraded to 10Mb/s per port
       READ: io=36416KB, aggrb=1145KB/s, minb=279KB/s, maxb=303KB/s, mint=30830msec, maxt=31778msec
       or 36MB, 1.2MB/s, 279kB/s, 1.80s, 4 iops
      read : io=8896.0KB, bw=286660B/s, iops=4, runt= 31778msec
        slat (usec): min=37, max=146, avg=102.63, stdev=15.05
        clat (msec): min=948, max=3460, avg=1796.61, stdev=313.59
         lat (msec): min=948, max=3460, avg=1796.72, stdev=313.59
    
       
    NFS over 10Mb/s ethernet using a HUB !!
       READ: io=33152KB, aggrb=1041KB/s, minb=253KB/s, maxb=284KB/s, mint=30598msec, maxt=31821msec
         or 33MB, 1.0MB/s, 260kB/s, 1.9s latency, 4 iops
      read : io=8192.0KB, bw=266423B/s, iops=4, runt= 31486msec
        slat (usec): min=55, max=205, avg=105.20, stdev=18.38
        clat (msec): min=767, max=2647, avg=1920.61, stdev=227.36
         lat (msec): min=767, max=2647, avg=1920.72, stdev=227.36
    
    Updated Test - eceServ still 3 x SSD RAID 10 with 3 x 10k RPM Velociraptors but now i3 on SuperMicro with 64G
      and IB ConnectX-2 to clients
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux2-t 128304M 148413  75 86777   5 136715  14 185525  99 612089  39  2773  13
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  9577  10 +++++ +++ 15481  11  8249  10 +++++ +++ 12548  13
    eceLinux2-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA,128304M,148413,75,86777,5,136715,14,185525,99,612089,39,2772.8,13,16,9577,10,+++++,+++,15481,11,8249,10,+++++,+++,12548,13
    
    Updated Test - eceServ still 3 x SSD RAID 10 with 3 x 10k RPM Velociraptors but now i3 on SuperMicro with 64G
      and IB InfiniHost III to i7 client
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux1-I 128304M 157857  79 105521   6 136272  12 183754  99 608778  17  3117  11
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  3348   5 +++++ +++  5620   5  3574   6 20763   9  6845   6
    eceLinux1-InfiniHost-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA,128304M,157857,79,105521,6,136272,12,183754,99,608778,17,3117.1,11,16,3348,5,+++++,+++,5620,5,3574,6,20763,9,6845,6
    
    
    First test - IB connection, NFSoIB - not via RDMA - to a 7200 rpm HD
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    QDR-IB-NFS  128128M 80589  54 73291   7 40351   7 97960  61 113744   8 140.2   0
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   115   2 +++++ +++   115   1   113   2 +++++ +++   114   1
    QDR-IB-NFS,128128M,80589,54,73291,7,40351,7,97960,61,113744,8,140.2,0,16,115,2,+++++,+++,115,1,113,2,+++++,+++,114,1
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    IB-QDR-NFSo 128128M 183721  98 505062  33 251227  24 175136  99 704626  45  9183  45
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 10997  15 +++++ +++ 17611  18 10897  18 +++++ +++ 28937  17
    IB-QDR-NFSoRDMA-to-ssd,128128M,183721,98,505062,33,251227,24,175136,99,704626,45,9183.0,45,16,10997,15,+++++,+++,17611,18,10897,18,+++++,+++,28937,17
    
    NFSoRDMA to a CentOS 7 server running ZFS RAIDz (3 x 250G SSDs), using InfiniHost III cards
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    IB-DDR-NFSo 128128M 184130  99 681531  38 334695  35 173072  99 993987  29  5214  10
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1756   6 +++++ +++  1462  11  1797   6 19544  10  1549  11
    IB-DDR-NFSoRDMA-to-RAIDz-3-ssd,128128M,184130,99,681531,38,334695,35,173072,99,993987,29,5214.1,10,16,1756,6,+++++,+++,1462,11,1797,6,19544,10,1549,11
    
    
    NFSoRDMA to a CentOS 7 server running ZFS RAIDz (3 x 250G SSDs), using QDR ConnectX-2 cards
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    IB-QDR-NFSo 128128M 184594  99 688436  37 339845  34 175052  99 1018267  46  4801  25
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1351  19 +++++ +++  1237  17  1354  15 +++++ +++  1436  15
    IB-QDR-NFSoRDMA-to-RAIDz-3-ssd,128128M,184594,99,688436,37,339845,34,175052,99,1018267,46,4801.2,25,16,1351,19,+++++,+++,1237,17,1354,15,+++++,+++,1436,15
    
    Test RAIDz - 3 ssd file system on the server
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ZFS-RAIDz-3-ssd 23G 126824  99 766040  79 352183  53 111284  99 815423  41  5484  18
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ 29821  99 +++++ +++ +++++ +++
    ZFS-RAIDz-3-ssds-on-server,23G,126824,99,766040,79,352183,53,111284,99,815423,41,5484.1,18,16,+++++,+++,+++++,+++,+++++,+++,29821,99,+++++,+++,+++++,+++
    
    btrfs of 3 ssds in raid 1 on the file server
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    btrfs-3-ssds-on 23G 123233  99 416982  14 170231  14 109029  98 427480  16 +++++ +++
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    btrfs-3-ssds-on-server,23G,123233,99,416982,14,170231,14,109029,98,427480,16,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    BTRFS with 3 SSDs in RAID 1 on the file server, tested from the client over QDR IB NFSoRDMA
      NOTE - the server crashed the first time I tried this test.  In the past my experience with BTRFS was that it wasn't yet stable
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    IB-QDR-NFSo 128128M 182386  96 368255  26 205813  24 174850  99 532123  34  6633  32
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 14425  16 +++++ +++ 21168  14 14811  17 +++++ +++ 20127  12
    IB-QDR-NFSoRDMA-to-BTRfs-3-ssd,128128M,182386,96,368255,26,205813,24,174850,99,532123,34,6632.5,32,16,14425,16,+++++,+++,21168,14,14811,17,+++++,+++,20127,12
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    QDR-IB-NFS- 128128M 185401  98 507588  31 241786  17 168968  99 702065  22 12098  18
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  6631  15 +++++ +++  9521  20  6466  18 +++++ +++ 21849  18
    QDR-IB-NFS-to-ssd,128128M,185401,98,507588,31,241786,17,168968,99,702065,22,12098.4,18,16,6631,15,+++++,+++,9521,20,6466,18,+++++,+++,21849,18
    
    Test SSD on the server with Bonnie++
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    SSD-speed-on-se 23G 116827  98 551325  35 240756  22 110494  99 666847  28 +++++ +++
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    SSD-speed-on-server,23G,116827,98,551325,35,240756,22,110494,99,666847,28,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    IB-QDR-NFSo 128128M 187368  99 592361  36 326021  21 176537  99 1081013  56  9373  38
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 14954  16 +++++ +++ 20493  17 14270  17 +++++ +++ 28693  16
    IB-QDR-NFSoRDMA-to-2-striped-ssd,128128M,187368,99,592361,36,326021,21,176537,99,1081013,56,9373.2,38,16,14954,16,+++++,+++,20493,17,14270,17,+++++,+++,28693,16
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    SSD-speed-on-se 23G 119803  99 680627  39 301202  25 111599  98 873434  23 +++++ +++
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    striped-2-SSDs-speed-on-server,23G,119803,99,680627,39,301202,25,111599,98,873434,23,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    NFS1gB-to-s 128128M 111587  59 114273   2 88361   3 123485  71 147153   2  7350   8
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1855   8 +++++ +++  4340   5  1975   7  2327  10  5243   3
    NFS1gB-to-ssd,128128M,111587,59,114273,2,88361,3,123485,71,147153,2,7349.8,8,16,1855,8,+++++,+++,4340,5,1975,7,2327,10,5243,3
    
    InfiniHost III HBAs connected with a SDR switch
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    IB-SDR-to-s 128128M 172504  98 372110  15 205330  14 158240  99 702011  23  8276  33
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  4580  14 +++++ +++  8401  17  4461  16 +++++ +++ 19318  14
    IB-SDR-to-ssd,128128M,172504,98,372110,15,205330,14,158240,99,702011,23,8275.7,33,16,4580,14,+++++,+++,8401,17,4461,16,+++++,+++,19318,14
    
    
    This was testing local HD on the client machine?? NFS connection lost???
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    NFS1gB-to-s 128128M 93253  52 78956   3 39927   2 104771  61 126355   3 146.6   0
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   116   2 32004  28   115   1   114   2  2241  23   115   1
    NFS1gB-to-ssd,128128M,93253,52,78956,3,39927,2,104771,61,126355,3,146.6,0,16,116,2,32004,28,115,1,114,2,2241,23,115,1
    
    
    OLD
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux3-NF 63840M 61707  94 97820   9 44910  10 62830  99 104103  10  1606   7
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   911   3  1911   4   933   1   910   4  2074  20  1019   1
    eceLinux3-NFS-Serv-1SSD,63840M,61707,94,97820,9,44910,10,62830,99,104103,10,1606.1,7,16,911,3,1911,4,933,1,910,4,2074,20,1019,1
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux4-NFS-S 15G 76019  87 88916   7 50145  14 83809  98 131553  16  4608  22
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   863   7 14247  17  1000   8   871   8  2316  11   996   8
    eceLinux4-NFS-Serv-1SSD,15G,76019,87,88916,7,50145,14,83809,98,131553,16,4608.5,22,16,863,7,14247,17,1000,8,871,8,2316,11,996,8
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux4-NFS-s 15G 79807  89 96881   7 50348  14 80505  99 144033  15  4902  23
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   866   7 13994  17  1003   7   879   7  2338   9  1000   7
    eceLinux4-NFS-serv,15G,79807,89,96881,7,50348,14,80505,99,144033,15,4901.9,23,16,866,7,13994,17,1003,7,879,7,2338,9,1000,7
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux3-NF 63840M 78349  94 93492   7 45249  10 83076  99 110195  12  1279   5
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1122   4  2531   6  1143   4  1098   6  2795   6  1366   3
    eceLinux3-NFS-serv,63840M,78349,94,93492,7,45249,10,83076,99,110195,12,1279.5,5,16,1122,4,2531,6,1143,4,1098,6,2795,6,1366,3
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux5-NFS-s 63G 87956  86 88906  14 42920  13 98124  94 103878  17  1289   6
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1106   4  2569   3  1145   3  1088   4  2791   4  1361   3
    eceLinux5-NFS-serv,63G,87956,86,88906,14,42920,13,98124,94,103878,17,1288.9,6,16,1106,4,2569,3,1145,3,1088,4,2791,4,1361,3
    
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecewo32SSD   31976M 58493  81 60873  20 31223  21 57524  90 77272  15 189.3   1
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    ecewo32SSD,31976M,58493,81,60873,20,31223,21,57524,90,77272,15,189.3,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    
    ECEServ ./bonnie++ -m eceServBoot -d /tmp -u root
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceServBoot  63704M 82065  80 73747   9 38839   6 97081  92 197248  22 761.3   5
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    eceServBoot,63704M,82065,80,73747,9,38839,6,97081,92,197248,22,761.3,5,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    ECEServ ./bonnie++ -m eceServBoot -d /home/sysadmin/junk -u root
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceServBoot  63704M 103424  98 561295  52 179538  28 100067  96 463026  54  1560  10
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    eceServBoot,63704M,103424,98,561295,52,179538,28,100067,96,463026,54,1559.6,10,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    
    eceWO3 - Adaptec 6805e RAID: 840 EVO 250G (6Gb/s) and WD10EADS-65M2B0 (3Gb/s) in a RAID-1 mirror
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecewo3          62G 53191  85 109845  23 82854  13 63523  96 543786  58 +++++ +++
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    ecewo3,62G,53191,85,109845,23,82854,13,63523,96,543786,58,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ecewo3-WOdb-NFS 62G 55190  97 59859   9 22796   5 39248  56 75772   7  2866  14
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1016   9 22628  31   884   9   853   9  3555  15   628   5
    ecewo3-WOdb-NFS-share,62G,55190,97,59859,9,22796,5,39248,56,75772,7,2865.9,14,16,1016,9,22628,31,884,9,853,9,3555,15,628,5
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux3-NF 63840M 45897  75 46221   5 29728   7 61443  98 71283   8 846.0   3
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   371   2   765   3   375   2   360   2   776   2   393   2
    eceLinux3-NFS-serv,63840M,45897,75,46221,5,29728,7,61443,98,71283,8,846.0,3,16,371,2,765,3,375,2,360,2,776,2,393,2
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    SuperM Cent 128168M 182710  99 1061873  29 311626  15 164701  97 331800   8  3547   7
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  6027  11 +++++ +++  6181  12  6127  12 +++++ +++ 11102   9
    SuperMicro FreeNAS 10Gbe with 4x 1TB SSD to SuperMicro 10Gbe CentOS 7,128168M,182710,99,1061873,29,311626,15,164701,97,331800,8,3547.2,7,16,6027,11,+++++,+++,6181,12,6127,12,+++++,+++,11102,9
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    Linux12i7-t 128168M 11464   7 10016   0  7156   0 14406   8 14368   0  1745   3
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1095   0 28088   0  1638   0  1362   0  4419  36  2329   0
    Linux12i7-to-FreeNAS-SSD-at1Gb,128168M,11464,7,10016,0,7156,0,14406,8,14368,0,1745.3,3,16,1095,0,28088,0,1638,0,1362,0,4419,36,2329,0
    
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    Linux12i7-t 128168M 11468  14 11467   1  6901   1 12404  24 13175   1  1070   6
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   493  11 17779   9   677   9   499  12  1042  32  1005   0
    Linux12i7-to-FreeNAS-SSD-at100Mb,128168M,11468,14,11467,1,6901,1,12404,24,13175,1,1070.0,6,16,493,11,17779,9,677,9,499,12,1042,32,1005,0
    
    
    March 13, 2016 - AMD 8 core to eceServ with RAID 10 over 1Gb/s
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    Linux3AMD-to 63840M 63975  96 96058   7 45568  10 65800  99 100507  10  1960   7
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1113   3  2531   6  1142   6  1106   4  2792   5  1382   5
    Linux3AMD-to-eceSERV-SSD,63840M,63975,96,96058,7,45568,10,65800,99,100507,10,1960.0,7,16,1113,3,2531,6,1142,6,1106,4,2792,5,1382,5
    
    

    Hard Drive Performance Tests - May 2014

    Using Bonnie++ (http://www.coker.com.au/bonnie++/) I've tested the performance of various systems.

    ./configure && make; ./bonnie++ -m ServerName -d /tmp -u regular_user

    %CP is the CPU usage (percent) during each test.

    The file tests are sequential file creation, sequential file deletion, and file creation/deletion in random order.
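    For reference, a build-and-run sketch (the version number, paths and the bonnie.out file name here are examples only, not a record of the exact commands used):

        tar xzf bonnie++-1.03e.tgz
        cd bonnie++-1.03e
        ./configure && make
        ./bonnie++ -m ServerName -d /tmp -u regular_user | tee bonnie.out
        tail -1 bonnie.out | bon_csv2txt    # bon_csv2txt ships with bonnie++ and re-prints
                                            # the trailing CSV summary line as a readable table

    The trailing comma-separated line that Bonnie++ prints is what appears after each result block in this section.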

    Summary of results (each rate is followed by its %CP):

       Size  SeqOut Chr  SeqOut Blk     Rewrite   SeqIn Chr   SeqIn Blk  Rand Seeks Files  Server
              K/sec %CP   K/sec %CP   K/sec %CP   K/sec %CP   K/sec %CP    /sec %CP
     32112M   68625  99  255517  62  116224  22   75361  98  270482  19   +++++ +++    16  woDB SSD RAID-1 - May 2014
     32112M   69039  99  260258  63  118080  22   75692  98  261735  18   +++++ +++    16  woDB - May 2014
        15G   89242  98   84128  13   42106  12   77584  83  193194  25   514.0   3    16  ieee - May 2014
        15G   94683  99  154531  22   63067  18   87611  91  171622  25   588.8   4    16  ieee 15k SAS RAID-1 - May 2014
        15G   80732  84   78604   8   39446  10   77845  79  112135  11   500.5   2    16  Arbeau 300G RAID-1 10k rpm - May 2014
        31G   77683  97  108299  14   53168  13   75634  92  219774  24  5804.9  22    16  arbeau-1SSD
     63704M   81685  80   73558  10   39226   7   91200  87  163871  19   701.7   5    16  Serv - May 2014
     63840M   48923  62   69092  15   31357   8   69075  91   74402   8   209.7   0    16  eceLinux1 - May 2014
     31968M   75112  94   82801  16   32684   9   74429  89   96816  15   160.0   1    16  Home CPU xUbuntu 500G HDD M4A785 MB - Dec 2014
     15464M   47699  74   66472  22   29934  10   64674  84   70348  11   285.0   1    16  Admin - May 2014
      6576M   50305  72   53666  10   24860  16   37116  63   72527  14  4544.5  35    16  System - May 2014
      6576M   53832  74   44700  11   20513  12   45640  66   48859  10   213.8   1    16  System WD 1TB Green - May 2014
      6576M   52834  71   57261  14   26660  15   41495  66   71863  15  4552.5  35    16  System 40G PATA + SSD - May 2014
      6576M   53226  72   54672  12   32294  20   49952  73   71521  14  4988.3  33    16  System 40G PATA (SSD stayed in array!) - May 2014
        15G   77727  90   77773   9   34234   9   77477  86  114513  13   422.1   2    16  Linux4 - May 2014
     15464M   70110  96  122212  20   43551  12   67269  80   97098  13   663.2   3    16  Web - May 2014
        39G   76712  96  128349  21   57215  14   84070  86  229732  32  5445.0  27    16  eceWebSSD - Nov 2014
        31G   67163  92   90264  23   37505  13   68759  76  111940  15   284.2   1    16  Mail - May 2014
     31968M   75112  94   82801  16   32684   9   74429  89   96816  15   160.0   1    16  Home M4A785 500G WD HD xUbuntu - Dec 2014

    (The System WD 1TB Green run also recorded a Sequential Create rate of 30828/sec at 99 %CP.)
    System Bonnie++ Results
    Arbeau, one SSD in parallel with Velociraptor
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    eceLinux5-NFS-s 63G 85018  88 88658  15 43126  13 99133  99 105891  18  1625   6
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  1099   5  2572   5  1113   5   962   5  2772   3  1265   3
    eceLinux5-NFS-serv1SSD,63G,85018,88,88658,15,43126,13,99133,99,105891,18,1625.2,6,16,1099,5,2572,5,1113,5,962,5,2772,3,1265,3
    
    Arbeau, one SSD in parallel with Velociraptor
    arbeau-1SSD,31G,77683,97,108299,14,53168,13,75634,92,219774,24,5804.9,22,16,
    
    Web 3Ware raid-1 10k Velociraptor and 512M Samsung SSD
    eceWebSSD,39G,76712,96,128349,21,57215,14,84070,86,229732,32,5445.0,27,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    Serv raid-1 5xSAS HDs and 1x1TB Samsung SSD
    eceServHome-1SSD,63704M,102237,98,551985,52,194237,29,102676,97,518187,62,2003.3,13,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    
    Mail 3Ware raid-1 10k Velociraptor
    Mail,31G,67163,92,90264,23,37505,13,68759,76,111940,15,284.2,1,16,
    
    Web 3Ware raid-1 10k Velociraptor
    Web,15464M,70110,96,122212,20,43551,12,67269,80,97098,13,663.2,3,16,
    
    ieee 15k rpm SAS MD RAID-1
    ieee,15G,94683,99,154531,22,63067,18,87611,91,171622,25,588.8,4,16
    
    System - MD RAID-1 array 1TB WD Green drives
    System,6576M,53832,74,44700,11,20513,12,45640,66,48859,10,213.8,1,16,30828,99,
    
    Linux4 (Adaptec mirror, 7200rpm SATA)
    Linux4,15G,77727,90,77773,9,34234,9,77477,86,114513,13,422.1,2,16,
    
    System re-run as root
    System,6576M,52834,71,57261,14,26660,15,41495,66,71863,15,4552.5,35,16,
    
    woDB
    woDB,32112M,68625,99,255517,62,116224,22,75361,98,270482,19,+++++,+++,16,
    
    woDB re-run as root
    woDB,32112M,69039,99,260258,63,118080,22,75692,98,261735,18,+++++,+++,16,
    
    ieee, CentOS 6: MD RAID on top of an LSI RAID with 2 x 300G 15k rpm SAS drives; the 2nd MD RAID member is a 7200rpm SATA drive
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ieee            15G 89242  98 84128  13 42106  12 77584  83 193194  25 514.0   3
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 
    ieee,15G,89242,98,84128,13,42106,12,77584,83,193194,25,514.0,3,16,
    	       	
    arbeau, CentOS 6 3Ware raid 1, 300G 10k rpm
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    Arbeau          15G 80732  84 78604   8 39446  10 77845  79 112135  11 500.5   2
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 
    Arbeau,15G,80732,84,78604,8,39446,10,77845,79,112135,11,500.5,2,16

    Serv 15k rpm SAS 6 drives raid 10
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    Serv         63704M 81685  80 73558  10 39226   7 91200  87 163871  19 701.7   5
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 
    Serv,63704M,81685,80,73558,10,39226,7,91200,87,163871,19,701.7,5,16
    
    Linux1, 7200rpm SATA
    eceLinux1,63840M,48923,62,69092,15,31357,8,69075,91,74402,8,209.7,0,16
    
    MD raid 160G SATA 7200 rpm
    Admin,15464M,47699,74,66472,22,29934,10,64674,84,70348,11,285.0,1,16,
    
    System - MD RAID, 40G PATA without the SSD?? (the SSD seems to have stayed in the array)
    System,6576M,53226,72,54672,12,32294,20,49952,73,71521,14,4988.3,33,16,
    
    System, MD raid, 40G PATA drive and SSD
    System,6576M,50305,72,53666,10,24860,16,37116,63,72527,14,4544.5,35,16,
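
    Since every CSV summary line above uses the same field order (machine, size, then K/sec and %CP pairs for each test), a quick side-by-side comparison can be pulled out with awk. A rough sketch, assuming the CSV lines have been collected into a file named bonnie-results.csv (a made-up name):

        # field 5 = sequential block write K/sec, field 11 = sequential block read K/sec
        awk -F, 'NF>=11 {printf "%-30s  write %8s K/sec  read %8s K/sec\n", $1, $5, $11}' bonnie-results.csv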