The following ports are automatically enabled on all ECE Nexus and Linux computers. Note that UDP traffic from the wireless address space (172.x.y.z) is unlikely to reach the 129.97.x.y address space due to the way the wireless access works.
Port / Program | Protocol | IP Range | Comment |
---|---|---|---|
10000 to 10010 | TCP and UDP | 129.97/16 | ECE 355 |
10000 to 11000 (Linux only) | TCP and UDP | 129.97/16, 172/8, 10/8 | ECE 355 |
22222 | UDP | 129.97/16 | SIP video |
22224 | UDP | 129.97/16 | SIP audio |
22232 | UDP | 129.97.8/24 | SIP local video |
22234 | UDP | 129.97.8/24 | SIP local audio |
4000 | UDP | 129.97/16 | SIP setup?? |
5060 | UDP | 129.97/16 | SIP basic audio |
5061 | UDP | 129.97/16 | SIP messages |
Echo Request | ICMP | all internet | ping service |
UserAppSVC.exe | Any | 129.97.56/24 | UserApp security tracking |
NtTsyslog.exe | Any | 129.97.56/24 | syslogging of security and Windows messages |
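A quick way to spot-check one of these openings from another machine (a sketch; the host name here is a placeholder, not a real machine):
```
# TCP check against an ECE 355 port (hypothetical host)
nc -vz -w 3 some-ece-host.uwaterloo.ca 10005
# UDP probe of the SIP port; a UDP "success" only means the packet was sent
nc -vzu -w 3 some-ece-host.uwaterloo.ca 5060
```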
This software is manually installed or configured:
```
SELECT TABLE_SCHEMA AS `Database`,
       SUM((data_length + index_length) / (1024 * 1024)) AS `Database_Size`
  FROM information_schema.TABLES
 GROUP BY table_schema
 ORDER BY `Database_Size` DESC;
+--------------------+---------------+
| Database           | Database_Size |
+--------------------+---------------+
| yelp_db            |    10944.2970 |
| mysql              |        2.4268 |
| information_schema |        0.1560 |
| ece356db_lab1      |        0.0624 |
| sys                |        0.0156 |
| ece356db_zcwen     |        0.0156 |
| performance_schema |        0.0000 |
+--------------------+---------------+
```
```
Feb 14 12:14:44 ECE-CIRCUITS8.NEXUS.UWATERLOO.CA NT: The search service has detected corrupted data files in the index {id=2501}. The service will attempt to automatically correct this problem by rebuilding the index. Context: Windows Application Details: The content index catalog is corrupt. 0xc0041801 (0xc0041801)
Feb 14 12:14:55 ECE-CIRCUITS8.NEXUS.UWATERLOO.CA NT: The application cannot be initialized. Context: Windows Application Details: The content index data on disk is for the wrong version. (HRESULT : 0xc0041821) (0xc0041821)
Feb 14 12:14:55 ECE-CIRCUITS8.NEXUS.UWATERLOO.CA NT: The index cannot be initialized. Details: The content index data on disk is for the wrong version. (HRESULT : 0xc0041821) (0xc0041821)
Feb 14 12:15:25 ECE-CIRCUITS8.NEXUS.UWATERLOO.CA NT: The Windows Search Service is starting up and attempting to remove the old search index {Reason: Index Corruption}.
```
Using "regedit", expert the following Registry Key to a file for backup:- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLincensing Then delete" 1. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing\HardwareID\ClientHWID 2. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing\Store\LICENSE000
In Firefox, go to the URL bar, type about:config, and scroll down to browser.cache.disk.parent_directory to see if it is set to C:\temp. If it is missing or set incorrectly, fix it. We preset these variables in a login script (q:\gen\etc\winlogon.cmd) that runs Firefox with the -CreateProfile "%USERID% N:\software\firefox" option and Thunderbird with -CreateProfile "%USERID% N:\mail\thunderbird". In C:\Documents and Settings\%USERID% there should be very little for Firefox and Thunderbird if everything is set correctly; however, any downloaded themes or plugins may end up there.
zgrep -c "sshd.*Accepted password" from messages - weekly
Data 2018 to current - weekly totals - /504 for interval average:
```
2 148 1448 6477 853 181 80 1173 829 763 409 32 134 2163 10 6 16 14 13 8 28 17 17 19 7 12 17 25 13 6 6 0 2 0 12 1 14 6 11 39 16 60 177 148 445 388 316 1257 1144 122 113 11 72 99 1322 1814 2664 3381 8928 886 848 6397 16788 4684 8011 13085 802 190 150 60 854 2070 6158 5815 4433 8462 653 4276 14127 5288 14055 4462 4170 920 336 52 68 244 2763 9807 5418 7010 3523 1623 3021 10038 10769 10928 6414 7959 2943 284 308 104 170 1261 8348 11271 18387 9509 16226 1287 1657 12852 22395 5811 10804 12478 1820 14653 3084 549 489 2626 7334 8750 9949 9918 17414 12652
```
Software Name | Version | ECE Courses | General Description |
---|---|---|---|
COMSOL MultiPhysics | | NE 241 (Fall, 20 users at a time), ECE 375 (Winter/Fall, 13 users at a time), NE 454B (Fall, ~50 students) | |
Silvaco TCad | | ECE 433, ECE 730 (W2016, W2017) | |
C++11 | gcc 4.7+ | ECE 453 | On Linux - eceUbuntu, March 2015; gcc 4.1.2 is on CentOS 5/6 - far too old |
AMANDA | 2.5.0 | | |
BOUML | 2.29 | ECE 251, ECE 355 | |
Dave's ColdFire Emulator | 0.3.2 | ECE 354 | |
Data Display Debugger (ddd) | 3.3.11 | ECE 355 | |
Doxygen | 1.4.4 | | |
Electric + SFS | 7.00 | 4th Year Projects | C version of Electric VLSI CAD package |
Electric / Java | 8.03 | 4th Year Projects | Java version of Electric VLSI CAD package |
FireFox | 1.5 | | FireFox web browser |
GCC for SPARC Systems | 4.01 | ECE 251, ECE 355 | GCC optimized for Sun SPARC processors with Sun Forte backend code generator |
GCC for ColdFire 5307 | 3.4.6 | ECE 354 | GCC cross compiler |
Mtx | 1.2.18 | | Magnetic Tape eXecutive utility used to control the tape drive |
MySQL | | | |
Opera | 8.50 | | Opera web browser |
Scons | 0.96.1 | | |
SmartMon Tools | 5.33 | | |
Sonnet | 10.52 | ECE 471(?) | |
STAR | 1.4.3 | | Super TAR tape backup software |
Xcircuit | 3.4.26 | ECE 241 | There are drawing programs, and there are schematic capture programs. All schematic capture programs will produce output for inclusion in publications. However, these programs have different goals, and it shows. Rarely is the output of a schematic capture program really suitable for publication; often it is not even readable, or cannot be scaled. Engineers who really want to have a useful schematic drawing of a circuit usually redraw the circuit in a general drawing program, which can be both tedious and prone to introducing new errors. |
NG-SPICE | | ECE 241 | SPICE 3F5 simulation engine |
KJ Waves | | ECE 241 | SPICE GUI / front end |
OpenOffice | 2.2 | | Office productivity application suite |
```
# To fix the Calibre RVE socket error - Eric P - this limits Calibre to one user per machine!?
set MGC_CALIBRE_LAYOUT_SERVER=$HOSTNAME:91829
```
```
yum install centos-release-scl-rh
yum install devtoolset-7-gcc devtoolset-7-gcc-c++
update-alternatives --install /usr/bin/gcc-4.9 gcc-4.9 /opt/rh/devtoolset-7/root/usr/bin/gcc 10
gcc --version
gfortran --version
```
```
echo 100 > /proc/sys/fs/mqueue/msg_max
echo 512 > /proc/sys/fs/mqueue/queues_max
```
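Those echo settings do not survive a reboot; a sketch of making them persistent via sysctl (the file name is arbitrary):
```
# /etc/sysctl.d/90-mqueue.conf
fs.mqueue.msg_max = 100
fs.mqueue.queues_max = 512
```
Then load with `sysctl --system`.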
```
LOCAL_CONFIG
O CipherList=HIGH
O CipherList=ALL:!ADH:!NULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:-LOW:+SSLv3:+TLSv1:-SSLv2:-EXP:-eNULL
```
although some recommended the following:
```
LOCAL_CONFIG
O CipherList=HIGH
O ServerSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_NO_SSLv3 +SSL_OP_CIPHER_SERVER_PREFERENCE
O ClientSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_NO_SSLv3
```
Test the settings with:
```
nmap --script ssl-enum-ciphers -p 465 server_name
```
In smb.conf:
```
security = user
map to guest = Bad User
```
Removing Samba3 was done with "yum erase samba samba-common", although many packages were removed and not re-installed.
Into /CMC/kits/TowerSBC18/HOTCODE/amslibs/cds_default/etc/templates/rds_emxconfig.il add:
```
EMX_interface_path = "/CMC/tools/cadence/EMX23.10.000_lnx86/share/emx/virtuoso_ui/emxinterface"
;EMX_EMX_opts=" " ; without this it fails since it's undefined
EMX_path="/CMC/tools/cadence/EMX23.10.000_lnx86/bin"
RDS_EMX_DEVEL_SETUP=nil ; helpful??
; Defined this for the file: /CMC/kits/TowerSBC18/HOTCODE/techs/sbc18he5pca/emx/v1p00/sbc18he5pca_emxconfig.il
EMX_combined_opts=" "
EMX_ps_viewer=" "
EMX_remote_machine=" "
```
```
apt install gcc-10 g++-10 cpp-10
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 \
    --slave /usr/bin/g++ g++ /usr/bin/g++-10 \
    --slave /usr/bin/gcov gcov /usr/bin/gcov-10
```
```
yum install yum-conf-softwarecollections
yum install devtoolset-8
```
Located in /opt/rh/...; use it with:
```
scl enable devtoolset-8 bash
```
```
sh /opt-src/Excluded/nodeSource16.x_setup.sh
apt install nodejs
```
June 2023
```
apt install npm
npm i wavedrom-cli -g
```
June 2023
```
apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev openssl
# unpack Python 3.9.13
./configure --enable-optimizations
make altinstall
pip3.9 install --upgrade pip
pip3.9 install -U pip setuptools   # this is critical to prevent inability to find setuptools
pip3.9 install cvxopt importlib numpy matplotlib scipy pyserial paramiko perlcompat
pip3.9 install fastnumbers oct2py openpyxl lxml networkx pandas seaborn progressbar2
pip3.9 install importlib graph_tools
# FAILS TO INSTALL: graphtool
pip3.9 list
# test the setup with:
python3.9 -c 'import ssl'
# was /usr/local/anaconda3/bin/python -> python3.8
ln -s /usr/local/bin/python3.9 /usr/bin/python
ln -s /usr/local/bin/python3.9 /usr/bin/python3
ln -s /usr/local/bin/python3.9 /usr/local/anaconda3/bin/python
ln -s /usr/local/bin/python3.9 /usr/local/anaconda3/bin/python3
```
July 2022
```
git clone https://gem5.googlesource.com/public/gem5
cd gem5/
scons build/X86/gem5.opt -j 12
cp -r ~/gem5/gem5/build/ /opt/gem5/
scons build/ARM/gem5.opt -j 12
cp -a build/ /opt-src/gem5/ /opt/gem5/
```
```
export CXX=g++
mkdir build
cd build
../configure --prefix=/opt/systemc-2.3.2
make
make check
make install
```
and then copy from /opt/systemc-2.3.2 into /opt-src for all other machines - March 2021
```
ln -s /opt/vim-complete-me ~/.vim
cp /opt/vim-complete-me/vimrc ~/.vimrc
```
Updating the vimrc file to point to /opt/vim-complete-me did not work. Each user could install it but it's ~700M - June 2019
```
python3
import tensorflow as tf
```
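A slightly more informative one-liner check (same idea, just printing the version as well):
```
python3 -c 'import tensorflow as tf; print(tf.__version__)'
```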
```
wget https://sourceforge.net/projects/libpng/files/libpng12/1.2.59/libpng-1.2.59.tar.gz/download
mv download libpng1.2.59.tar.gz
gunzip libpng1.2.59.tar.gz
tar -xf libpng1.2.59.tar
./configure
make
sudo make install
sudo mv /usr/local/lib/libpng* /usr/lib
```
```
sudo mv /var/lib/dpkg/info/udev.postinst /var/lib/dpkg/info/udev.postinst.backup
sudo apt-get install -f
sudo mv /var/lib/dpkg/info/udev.postinst.backup /var/lib/dpkg/info/udev.postinst
sudo apt-get update
sudo apt-get upgrade -y
```
The libraries have been moved into /opt-src/questa105/modeltech/lib and LD_LIBRARY_PATH is modified in /opt-src/CMC/local/maagaard/bin/vsim. Below is the hint from https://askubuntu.com/questions/602725/trouble-running-modelsim-on-ubuntu but it's not quite what I did.
Built: /homeOLD/praetzel/FreeType-Old-Build/freetype-2.4.12/include/freetype
```
** Fatal: Read failure in vlm process (0,0)
Segmentation fault (core dumped)
```
You probably need to build a new version of freetype, a font-rendering library, and modify ModelSim to use it. For an unknown reason ModelSim has an issue with the modern versions shipping in Arch and Ubuntu 14.04. First download the source code of freetype 2.4.12: http://download.savannah.gnu.org/releases/freetype/freetype-2.4.12.tar.bz2

Now install the build dependencies needed for libfreetype6, extract the source (using tar), and configure and build libfreetype:
```
sudo apt-get build-dep -a i386 libfreetype6
tar -xjvf freetype-2.4.12.tar.bz2
cd freetype-2.4.12
./configure --build=i686-pc-linux-gnu "CFLAGS=-m32" "CXXFLAGS=-m32" "LDFLAGS=-m32"
make -j8
```
The finished libraries are now available inside the objs/.libs directory. As they are necessary to run ModelSim, we need to copy them into the install directory so they don't get lost, and then modify ModelSim's vsim script to use the new libraries instead of the system-wide versions. Change directory to where you installed ModelSim - /opt/altera/13.1/modelsim_ase/ on my system. Note you may need to edit the directory paths to match those used on your system.
```
sudo mkdir lib32
sudo cp ~/Downloads/freetype-2.4.12/objs/.libs/libfreetype.so* ./lib32
```
Now edit the vsim launch script to ensure the new freetype libraries are used:
```
sudo vim bin/vsim
```
Search for the following line:
```
dir=`dirname $arg0`
```
and underneath add the following new line:
```
export LD_LIBRARY_PATH=${dir}/lib32
```
```
sudo snap install canonical-livepatch
sudo canonical-livepatch enable xxxxxxxKEYxxxxxxx
```
```
source /CMC/local/tools/cadence/IC617/setenv.csh
source /CMC/local/tools/cadence/SPECTRE171/setenv.csh
source /CMC/local/tools/cadence/PVS152/setenv.csh
```
```
if (! $?PATH) then
    setenv PATH ""
endif
set new="/opt/texlive/2017/bin/x86_64-linux"
switch ($PATH)
    case "${new}:*":
    case "*:${new}:*":
    case "*:${new}":
    case "${new}":
        breaksw
    case "":
        setenv PATH "${new}"
        breaksw
    case "*":
        setenv PATH "${PATH}:${new}"
        breaksw
endsw
unset new
```
and for /etc/profile.d/texlive.sh:
new="/opt/texlive/2017/bin/x86_64-linux" case "${PATH}" in ${new}:*|*:${new}:*|*:${new}|${new}) ;; "") PATH="${new}" ;; *) PATH="${PATH}:${new}" ;; esac unset new export PATH
Set up X11 forwarding for ECE machines by appending the following to /etc/ssh/ssh_config:
```
Host ece*
    ForwardX11 yes
```
To boot to console:
```
sudo systemctl set-default multi-user.target
```
Install CUDA 9.0 - NOT 9.1. Install NVidia driver 384.111, NOT 387.
```
sudo dpkg -i /opt-src/Excluded/GPU/Drivers-cuda/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
sudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
# install the driver - two parts - first remove nouveau, then run it again
sudo sh /opt-src/Excluded/GPU/Drivers-cuda/NVIDIA-Linux-x86_64-390.25.run
sudo update-initramfs -u
sudo shutdown -r now
sudo sh /opt-src/Excluded/GPU/Drivers-cuda/NVIDIA-Linux-x86_64-390.25.run
# to get a c99 program and caffe hdf5_hl to run:
sudo ln -s /usr/lib/x86_64-linux-gnu /usr/lib64
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
sudo -H pip install --upgrade pip
sudo -H pip3 install --upgrade pip
sudo -H pip3 install numpy pyserial
```
Then build Caffe:
```
git clone https://github.com/BVLC/caffe/
cd caffe
cp Makefile.config.example Makefile.config
```
Adjust Makefile.config (for example, if using Anaconda Python, or if cuDNN is desired) and remove _20 and _21 from the processors list. Update:
```
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
```
```
sudo apt install nvidia-cuda-toolkit
```
Had to patch the library dirs to get things to work for hdf5. Update:
```
LIBRARY_DIRS := ... /usr/lib64/hdf5/serial/
```
```
make all -j 4    # 4 CPU cores to parallelize the compile
make test
make runtest
```
Distribution: run make distribute to create a distribute directory with all the Caffe headers, compiled libraries, binaries, etc. needed for distribution to other machines.
```
praetzel2@eceTesla00:~/caffe$ make distribute
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
/usr/bin/ld: cannot find -lboost_python3
collect2: error: ld returned 1 exit status
Makefile:507: recipe for target 'python/caffe/_caffe.so' failed
make: *** [python/caffe/_caffe.so] Error 1
```
To fix that, update Makefile.config and change:
```
PYTHON_LIBRARIES := boost_python python3.5m
# PYTHON_LIBRARIES := boost_python3 python3.5m
```
I ran ./examples/mnist/train_lenet.sh with the following. Log in, create a directory, and copy the required example files:
```
mkdir caffe-testing
cd caffe-testing
cp -a /opt/caffe/examples/ .
cp -a /opt/caffe/build/ .
cp -a /opt/caffe/data/ .
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh
./examples/mnist/train_lenet.sh
```
```
git clone https://github.com/groeck/lm-sensors
```
Go into it, make and make install; also `modprobe nct6775` to be sure the driver is in - possibly script it so the driver loads at boot time. The check-gpu script needs the following:
```
echo "junk" > /tmp/mon/cpu-status.txt
/usr/bin/sensors | /bin/awk '{if ($1=="AUXTIN0:") {print " Physical id 0: ", $2, $0} else {next} }' >> /tmp/mon/cpu-status.txt
/usr/bin/sensors | /bin/awk '{if ($1=="SYSTIN:") {print " temp1: ", $2, $0} else {next} }' >> /tmp/mon/cpu-status.txt
```
Testing: the N queens problem is CPU-only - a serial version and the parallel nqueens_omp. It takes one int; do not go over 17 due to overflow. Binaries need to be run from their own directory because the path to the kernel is hard-coded within them. nbody-opencl.c is not used, as C bindings are not supported.

Disable UEFI boot for the Nvidia drivers - otherwise they don't install. Download CUDA 9.0, as 9.1 does not work with Linux (GPU drivers):
```
sudo dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb   # or the equivalent RPM for CentOS
sudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
```
The software should be in /usr/local/cuda and /var/nvidia* /var/cuda*. Use `lsmod | grep nvidia` to check for the existence of installed drivers; also check /proc/driver/nvidia/gpus. /etc/OpenCL/vendors/nvidia.icd is set up for linking in /etc/ld.so.conf.d/. openCL-headers needs to be installed; ocl-icd-devel, ocl-icd and pdflatex + xclip are needed. Monitor with nvidia-debugdump and nvidia-smi -q, and don't forget lshw and lm-sensors.
Disable secure boot in the BIOS or the drivers will not load. Use `lsmod | grep nvidia` to check for kernel modules; nouveau should be uninstalled. Check if the driver is running by looking in /proc/driver/nvidia/gpus. On CentOS 7 the gcc libs are too old (need GLIBCXX_3.4.21), so use /opt/lib64 in the Makefile include. Solaris Studio should be installed in /opt/oracle/solarisstudio12.3/bin/cc even if we're running a newer version. After the CUDA and driver install, check /usr/local/cuda and /var/nvidia* /var/cuda*. nvidia-smi lists running processes and GPU info; nvidia-smi -q lists a lot more. Users can then kill their own processes with kill -9 PID, where PID is the process id given by nvidia-smi.
```
mkdir BUILD
cmake ../llvm-svn
```
Edit cmake_install.cmake and set the install prefix:
```
# Set the install prefix
CMAKE_INSTALL_PREFIX=/home/ssingh/opt/llvm-3.0rc4   # ???
```
```
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release /opt-src/Excluded/Software-Source/llvm-3.0/llvm-3.0.src
```
Create a temp directory build-it to build the binary in:
```
cmake build-it
make
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release /opt-src/Excluded/Software-Source/llvm-3.0/clang-3.0.src/build-it -DCLANG_PATH_TO_LLVM_BUILD=/opt/llvm-3.0rc4/
```
```
sudo -H pip3 install cntk--cp35...
```
The software is in /opt-src/Excluded/Python-cntk
1) Install the libraries to cross-compile 32-bit code on 64-bit machines, the as86 tool, and the qemu emulator:
```
sudo apt install lib32ncurses5 lib32z1 lib32stdc++6 libc6-i386 libc6-dev-i386 gcc-multilib g++-multilib bin86 qemu
```
2) Download the gcc from the website: https://drive.google.com/open?id=1hPx2qAHJMe-aCZ4oOc8is0tQXRelcM06
The standard EPOS gcc lives inside the /usr/local/ia32 directory. First create the directory:
```
sudo mkdir /usr/local/ia32
```
Uncompress the gcc file you downloaded and move it to the /usr/local/ia32/ directory:
```
sudo mv gcc-4.4.4/ /usr/local/ia32/
```
- ASM: http://download.forge.objectweb.org/asm/asm-4.0.jar
- JUnit 4.10 from https://sourceforge.net/projects/junit/
- JUnit 5.0.2 into /opt/jars/junit5-r5.0.2/ from https://github.com/junit-team/junit5/releases/tag/r5.0.2
```
yum install gperf
sh autoconf.sh
./configure --prefix=/opt-src/verilog
```
```
wget https://www.virtualbox.org/download/oracle_vbox.asc
rpm --import oracle_vbox.asc
vi /etc/yum.repos.d/oracle.repo
```
and put in the following:
```
[virtualbox]
name=Oracle Linux / RHEL / CentOS-$releasever / $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/el/$releasever/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
```
```
yum install virtualbox
```
```
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=$PATH:`pwd`/depot_tools
mkdir dart-sdk
cd dart-sdk
gclient config https://github.com/dart-lang/sdk.git
gclient sync
cd sdk
./tools/build.py --mode release --arch x64 create_sdk
mkdir /opt-src/Dart
mv out/* /opt-src/Dart/
```
About line 312:
```
// if (setjmp(png_ptr->jmpbuf)) {
if (setjmp(png_jmpbuf(png_ptr))) {
```
About line 437:
```
// free(info_ptr->palette);
png_free_data(png_ptr, info_ptr, PNG_FREE_PLTE, -1);
```
```
thinlinc-tlmisc-4.4.0-4775.x86_64.rpm
thinlinc-tlmisc-libs-4.4.0-4775.x86_64.rpm
thinlinc-tlmisc-libs32-4.4.0-4775.i686.rpm
thinlinc-vsm-4.4.0-4775.x86_64.rpm
thinlinc-vnc-server-4.4.0-4775.x86_64.rpm
thinlinc-rdesktop-4.4.0-4775.x86_64.rpm
thinlinc-tladm-4.4.0-4775.x86_64.rpm
thinlinc-tlprinter-4.4.0-4775.noarch.rpm
thinlinc-webaccess-4.4.0-4775.noarch.rpm
```
```
mysql-community-bench-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-embedded-devel-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-client-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-libs-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-common-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-libs-compat-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-devel-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-server-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-embedded-5.7.5-0.6.m15.el6.x86_64.rpm
mysql-community-test-5.7.5-0.6.m15.el6.x86_64.rpm
```
Set up the password:
```
mysqladmin -u root password NEWPASSWORD
```
```
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcmrpi_defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
```
Add -j n where n is the number of processors * 1.5. The sources were from:
http://www.raspberrypi.org/documentation/linux/kernel/building.md
```
git clone --depth=1 https://github.com/raspberrypi/linux
git clone https://github.com/raspberrypi/tools
```
Nov 18, 2014
ModelSim 10.3 via CMC license into /opt on all machines
```
rpm --erase ruby rrdtool perl-rrdtool
# (also enable the DAG repository)
curl -sSL https://get.rvm.io | bash -s stable
source /etc/profile.d/rvm.sh
yum install libffi-devel-3.0.9-1.el5.rf.x86_64
```
```
svn co http://llvm.org/svn/llvm-project/llvm/tags/RELEASE_30/rc4 llvm-svn
cd llvm-svn/
cd tools/
svn co http://llvm.org/svn/llvm-project/cfe/tags/RELEASE_30/rc4 clang
./configure --enable-optimized
make
make install
```
export CC="gcc44" export CXX="g++44"
This method allows each member to view and edit files simultaneously in the same environment. The software required:
- UW VPN
- Microsoft VS Code
- VS Code Remote-SSH extension
- VS Code Live Share extension
- ThinLinc or MobaXterm

The host needs UW VPN access. They should set up an SSH session to eceubuntu.uwaterloo.ca in VS Code Remote-SSH (see the sketch below), and then set up a Live Share session for themselves and the lab partners. After joining the Live Share, each partner will be able to temporarily access and edit the files the host has open in VS Code. The host will also need a separate terminal connection in ThinLinc or MobaXterm to run the simulations, as the VS Code terminal does not support GUIs. This screen can be shared with other members using Webex.
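A sketch of the ~/.ssh/config entry the Remote-SSH extension can use (the user name is a placeholder):
```
Host eceubuntu
    HostName eceubuntu.uwaterloo.ca
    User your_watiam_id
```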
```
curl -s https://duo.com/DUO-GPG-PUBLIC-KEY.asc | sudo apt-key add -
```
Add this to /etc/apt/sources.list.d/duosecurity.list:
```
deb [arch=amd64] https://pkg.duosecurity.com/Ubuntu focal main
```
```
apt update
apt install duo-unix
```
Edit /etc/duo/pam_duo.conf and add the ikey, skey and host.

Config files:
https://help.duo.com/s/article/5085?language=en_US

/etc/ssh/sshd_config:
```
PasswordAuthentication no
KerberosAuthentication no
ChallengeResponseAuthentication yes
AuthenticationMethods keyboard-interactive
UsePAM yes
```
/etc/pam.d/sshd already uses common-auth, so no changes are needed.

/etc/pam.d/common-auth - keep the ordering of pam_krb5 and pam_unix and add pam_duo:
```
auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000
auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
auth requisite pam_unix.so nullok_secure try_first_pass
auth sufficient /lib64/security/pam_duo.so
```
ThinLinc requires that an OTP can be used twice, because the ThinLinc client first connects and authenticates to the master server and then reconnects and authenticates to the agent server. https://www.cendio.com/thinlinc/docs/tutorial/otp
eceSVN uses: subversion-1.6.11-15.el6_7.x86_64 [Red Hat stock for RHEL / CentOS 6]
- svn client, Ubuntu: 1.9.7
- svn client, eceLinux1/2/3: 1.7.14
- svn client, eceLinux4: 1.7.4 [it uses a more updated repo than CentOS 6]
```
msg.exe %USERNAME% "%USERNAME% - a gentle reminder, 'food (eating) or refreshments (drinking) are not allowed in the Engineering Labs'."
goto end
:end
```
Note - to find the uninstaller look under the registry in: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
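A sketch of searching that hive for a product's uninstall entry from the command line ("Quartus" is just an example search string):
```
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" /s /v DisplayName | findstr /i "Quartus"
```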
waitfor32.exe is on the Tuque share for adding a time-out for installs
E.g. waitfor32.exe -f200 sets a 200-minute timeout
"C:\Program Files\Common Files\Microsoft Shared\ClickToRun\OfficeC2RClient.exe" /update user updatepromptuser=false forceappshutdown=true displaylevel=false Or script it with psexec to (it will pop-up a message to a logged in user upon completion of the update): psexec @computers.txt -d -n 30 cmd /c "C:\Program Files\Common Files\Microsoft Shared\ClickToRun\OfficeC2RClient.exe" /update user updatepromptuser=false forceappshutdown=true displaylevel=false
```
START /WAIT vc_redist.x64.exe /install /quiet /norestart /log vc2015update3x86.log
START /WAIT vc_redist.x86.exe /install /quiet /norestart /log vc2015update3x64.log
START /WAIT GFortran42\setup.exe /s /f1"C:\nexus\install\work\fortran42.iss" /f2"C:\temp\fortran42.log"
START /WAIT GFortran46\setup.exe /s /f1"C:\nexus\install\work\fortran46.iss" /f2"C:\temp\fortran46.log"
REM If PSCad setup is run first it has popups to install GFortran
START /WAIT setup.exe /s /f1"C:\nexus\install\work\pscad4.iss" /f2"C:\temp\pscad46.log"
```
```
ccs_setup_8xxxxx.exe --save-response-file c:\temp\response.txt --skip-install true
```
To use the response file:
```
START /WAIT ccs_setup_8.2.0.00007.exe --response-file c:\nexus\install\work\response.txt --mode unattended --prefix C:\Software\ccs
```
```
[HKEY_LOCAL_MACHINE\SOFTWARE\Keysight\ADS\4.30\eeenv]
"HOME"="N:\\"
```
Q:\eng\ece\util\eclipse.bat is a batch file which: 1) appends C:\Program Files\Java\jdk1.7.0_10\bin to the path, and 2) runs C:\Software\adt-bundle-windows\eclipse\eclipse.exe.
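A minimal sketch of what such a batch file would contain (paths taken from the description above; not the actual file contents):
```
@echo off
rem append the JDK bin directory to the path, then launch Eclipse
set PATH=%PATH%;C:\Program Files\Java\jdk1.7.0_10\bin
"C:\Software\adt-bundle-windows\eclipse\eclipse.exe"
```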
- Java WSDP 1.6 Bundle: http://java.sun.com/webservices/download/webservicespack.html
- Application Server: http://java.sun.com/j2ee/1.4/download.html#sdk
- Tomcat for WSDP (needed if you are running the Service Registry locally): http://java.sun.com/webservices/containers/tomcat_for_JWSDP_1_5.html
May 2012
The following are performance tests using the Samsung Magician software with Samsung EVO 840 and 850 250G SSDs on an Asus P8H77-M motherboard with 8G of RAM.
Tested were different BIOS settings: IDE vs. AHCI configuration, and a 3Gb/s vs. 6Gb/s SATA connection.
In Win 7 it's critical that two services be set to auto-start at boot time, or changing from IDE to AHCI will bluescreen the machine. Change the Start value from the default of 3 to 0 for these two keys:
```
HKLM\System\CurrentControlSet\Services\msahci
HKLM\System\CurrentControlSet\Services\iaStorV
```
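A sketch of flipping those Start values from an admin command prompt rather than regedit:
```
reg add "HKLM\System\CurrentControlSet\Services\msahci" /v Start /t REG_DWORD /d 0 /f
reg add "HKLM\System\CurrentControlSet\Services\iaStorV" /v Start /t REG_DWORD /d 0 /f
```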
Windows 7 performance tests with a HD were done with a 2012-vintage 500G WD hard drive. Linux tests with a HD were done using a 2008-vintage 160G WD drive.
SSD Model, Computer | SATA Speed | IDE or AHCI | Seq. Read (MB/s) | Seq. Write (MB/s) | IOPS Read | IOPS Write | Win 7 HD Perf Score |
---|---|---|---|---|---|---|---|
Samsung EVO 850 250G, testbed | 3Gb/s | IDE | 212 | 236 | 8551 | 13712 | - |
| 6Gb/s | AHCI | 438 | 486 | 63898 | 62946 | - |
WD 500G 7200rpm, testbed | 3Gb/s | IDE | 98 | 100 | 208 | 410 | 5.9 |
| 6Gb/s | AHCI | 109 | 105 | 448 | 397 | 5.9 |
EVO 840, public06 | 6Gb/s | AHCI | 550 | 241 | 62207 | ** 4019 ** | 7.9 |
EVO 850, public10 | 3Gb/s | IDE | 202 | 226 | 8276 | 13605 | 7.9 |
| 6Gb/s | AHCI | 436 | 487 | 62040 | 62074 | |
EVO 850, testbed | 3Gb/s, Win updates installing | IDE | 208 | 222 | 6398 | 13511 | |
| 3Gb/s | IDE | 211 | 227 | 7903 | 13626 | 7.3 |
| 6Gb/s | IDE | 309 | 371 | 8943 | 16635 | 7.7 |
| 3Gb/s | AHCI | 284 | 269 | 50446 | 43355 | |
| 6Gb/s | AHCI | 550 | 469 | 60348 | 65448 | 7.9 |
WD 500G HD, testbed & lab computers | 3Gb/s | IDE | | | | | 5.9 |
| 6Gb/s | IDE | | | | | 5.9 |
| 6Gb/s | AHCI | | | | | 5.9 |
Bonnie++ 1.03 test results for Linux using a 2008-vintage 160G WD HD, first at 3Gb/s with IDE and second at 6Gb/s with AHCI.
Server | Setup | Size | Seq. Out Per-Chr K/s (%CP) | Seq. Out Block K/s (%CP) | Rewrite K/s (%CP) | Seq. In Per-Chr K/s (%CP) | Seq. In Block K/s (%CP) | Random Seeks /s (%CP) | Files |
---|---|---|---|---|---|---|---|---|---|
Linux Testbed, 2008-vintage 160G WD HD | IDE, 3Gb/s | 15720M | 99215 (82) | 103960 (9) | 47025 (4) | 98752 (90) | 128712 (7) | 203.7 (0) | 16 |
| AHCI, 6Gb/s | 15720M | 105761 (87) | 103348 (8) | 46492 (4) | 92428 (83) | 118931 (4) | 192.0 (0) | 16 |
Linux Testbed, 250G EVO 850 SSD IMPROVED | IDE, 3Gb/s | 15720M | 115854 (97) | 250008 (23) | 101661 (11) | 110786 (99) | 243710 (15) | 9787.4 (30) | 16 |
| AHCI, 6Gb/s | 15720M | 113149 (95) | 308149 (26) | 182605 (15) | 113467 (99) | 656404 (30) | +++++ (+++) | 16 |
Linux eceSVN, DDR2, M3A785T-M, 2.5GHz, 160G WD HD | IDE, 3Gb/s | 15G | 72777 (98) | 109252 (43) | 44443 (21) | 45691 (92) | 136898 (25) | 255.9 (1) | 16 |
| AHCI, 3Gb/s | 15G | 70148 (95) | 108934 (40) | 43686 (20) | 43949 (92) | 138138 (25) | 280.0 (1) | 16 |
Linux testBed, 1TB WD 10k rpm Velociraptor | IDE, 3Gb/s | 15720M | 108624 (90) | 196638 (17) | 91043 (10) | 104320 (94) | 232484 (14) | 413.4 (2) | 16 |
| AHCI, 6Gb/s | 15720M | 113884 (95) | 195132 (17) | 84157 (8) | 105111 (93) | 215252 (10) | 586.9 (1) | 16 |
eceUbuntu /opt directory | IDE, 3Gb/s | 47360M | 91171 (99) | 119559 (22) | 68295 (14) | 91713 (98) | 262224 (31) | 470.2 (2) | 16; also recorded: 25134 (80) |
hdparm -c1 (32-bit transfer) and -d1 (DMA) enabled unless otherwise stated.
Speed tests are read tests: hdparm -t, with the -T result in square brackets [].
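Typical invocation (the device name is an example):
```
hdparm -c1 -d1 /dev/sda   # enable 32-bit I/O support and DMA
hdparm -tT /dev/sda       # buffered disk read (-t) and cached read (-T)
```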
For the university: http://ego.uwaterloo.ca
First, use the 3rd link to set it to use your Polaris/Nexus password using the EngMail server: http://ego/~uwdir/UW-SignOn.html
Then click on the link to "Update Your UWdir Data": https://ego/~uwdir/Update
2004 inventory
It's that time of the century - Win XP is dead and one option is to jump older desktops and laptops to Linux!
Here is a comparison of some Ubuntu flavours.
Test drive it from a bootable USB key made with the Universal USB Installer.
This is for a 3.4GHz AMD 2-core with 4G of RAM on an Asus M4A785T-M motherboard with 500G HD done March 2014.
Operating Sys. | Boot Time Sec | Total Boot to Login Sec | Hibernate Sec | Restore Sec |
---|---|---|---|---|
Windows XP Pro x86 | 44 | 72 | 20 | 17 |
XUbuntu 13.04 | - | 46 | <5 | <5 |
19 June 2018 - the CentOS 6 file server had to enable SMB 2 so that eceServ1 would work with Chorus machines (Win 10 1709): edit smb.conf and add `max protocol = SMB2` (Samba 4.6).
Nov 2017 - CentOS 7 eceWeb & eceAdmin can't send email to @gmail but on-campus works.
```
yum install postfix
rpm --erase ssmtp
chkconfig postfix on
service postfix start
```
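One possible direction (an assumption, not what was actually done): relay outbound mail through an on-campus relay, since on-campus delivery works. A sketch for /etc/postfix/main.cf with a placeholder relay host:
```
# hypothetical campus relay - replace with the real relay host
relayhost = [mail-relay.example.uwaterloo.ca]
```
followed by `systemctl restart postfix`.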
Joining machines to domains. Assuming the local passwd file is used for UID/GID. Edit the smb.conf and:
https://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/idmapper.html

eceWeb - edit smb.conf:
```
security = ADS
realm = nexus.uwaterloo.ca
```
```
net ads join -U'__Domain_Account___'
Enter __Domain_Account___'s password:
Using short domain name -- NEXUS
Joined 'ECEWEB' to dns domain 'NEXUS.UWATERLOO.CA'
DNS Update for eceweb.uwaterloo.ca failed: ERROR_DNS_GSS_ERROR
DNS update failed!
[root ~]# net ads testjoin
Join is OK
```
Here is a simple script to track recent snapshots on FreeNAS or ZFS. I just cron it daily so it sends me an email; if I don't see the snapshots, or see 0 size for the previous day, something is wrong ...
```
YESTERDAY=`date +%Y%m%d --date yesterday`
TODAY=`date +%Y%m%d`
echo Yesterday was $YESTERDAY and today is $TODAY
ssh _server_ zfs list -t snapshot | grep $YESTERDAY
ssh _server_ zfs list -t snapshot | grep $TODAY
```
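A sketch of the daily cron entry (the script path and recipient are placeholders):
```
# /etc/crontab - mail the snapshot check every morning at 06:30
30 6 * * * root /root/bin/check-snapshots.sh 2>&1 | mail -s "ZFS snapshot check" admin@example.uwaterloo.ca
```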
hdparm tests (cached read, drive read), -c1 -d1
Unless otherwise stated, all UPSes are APC Smart-UPS units.
14-Mar-2019 - reboot eceTesla3 - GPU driver became unresponsive, process issues
14-Mar-2019 - reboot eceTesla2 - load 300, nvidia-smi was unresponsive
~7-Mar-2019 - reboot eceTesla0 - process spawning issues, kernel update
Nov 19 - e2-sw-2403a-a lost its uplink, spanning tree issue
Sept 17 - e2-sw-2403a-a wasn't working for ~20 ports - no link lights at all, re-seated management board
May 4 - e2-sw-2403a-a installed
DC POP down (it's Cisco?) - Sept 28 to 31
2010? - APC 3000VA UPS blew out its electronics seconds after a power outage.
Apr 2009 - CPH-1333A battery failure resulted in UPS continually power cycling.
Mar 2009 - severe battery failure (puffing, venting, hot) on 1400VA in E2-2361 on Unix rack
Dec 2008 - Major failure of 3000VA UPS in E2-2361 on Unix rack
2006? - minor battery failure on 1400VA UPS in E2-3340 resulting in UPS turning off and not failing onto mains
Batteries only seem to last 2 to 3 years. This is significantly shorter than the 20+ years I was used to at a commercial UPS manufacturer. Visually, batteries indicate overcharging (puffing, cracking), and some failures involve the battery going high-Z. Measuring voltage on a pair of 3000VA APC UPSes, the battery pairs measured 26.2V and 27.4V, so overvoltage charging could only happen due to a difference in battery capacity and/or impedance, resulting in the lower-capacity / higher-Z battery being overcharged.
Replacement Batteries
WOL works with wake-up on Windows with M2N and M3 Asus AMD motherboards; use ether-wake on Linux (net-tools RPM).
Websites with useful info:
Good backgrounder: http://www.dslreports.com/faq/wol?text=1
Linux source code: http://ahh.sourceforge.net/wol/
Great site listing tons of packages to use:
http://gsd.di.uminho.pt/jpo/software/wakeonlan/mini-howto/wol-mini-howto-3.html
Magic UDP packet on port 9 (almost all tools)
AOpen motherboards support WOL with a PCI card using the connector
ASUS P3 boards require a PCI Ethernet NIC that supports WOL - none of ours do
The P4P-800VM apparently supports WOL but I've not gotten it to work.
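Typical ether-wake usage (the interface and MAC address are placeholders):
```
sudo ether-wake -i eth0 00:11:22:33:44:55
```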
In 2018 performance issues with eceSVN were raised. Here are some performance results using
svn co http://ecesvn[-new].uwaterloo.ca/courses/ece108
The virtual machine servers have Intel i5-6500 CPUs, a 10Gb/s connection back to the file server holding the VM, and 10Gb/s back to the file server with the SVN repos; each server was running 4 VMs, each with a single core assigned.
My guess for the poor performance of the 2-core VM is that the KVM server is running CentOS and I've seen 2x worse performance on CentOS than Ubuntu when many cores / threads are in use. It seems as if the older kernels in CentOS do not handle new CPUs and many cores well.
KVM Server | Server | OS | Specs | Runtime |
---|---|---|---|---|
N.A. | eceSVN | CentOS 6 | 32G RAM, 4-core i5-3300 | 46 min, 44 min |
CentOS 7 10Gb/s | eceSVN-new | Ubuntu 18.04 LTS | 1-core of i5-6500, 4G (swap in use) | 15 min |
1-core of i5-6500, 8G | 10 min | |||
2-core of i5-6500, 8G | 25, 25, 11:15 min | |||
3-core of i5-6500, 8G | 12 min | |||
Ubuntu 18.04 1Gb/s | 1-core of i7-8700, 8G | 9 min | ||
2-core of i7-8700, 8G | 9:15 min | |||
3-core of i7-8700, 8G | 10 min |
Nov 18, 2014 - This was a test to evaluate the space and computing requirements to build Raspbian for the Raspberry PI.
Kernel code is 837M clean and 1.2G when built. ECELinux5 is running mprime in the background niced.
source /opt/bin/setup-raspbian.csh
make clean; make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcmrpi_defconfig
date ; make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j 6 ; date
Some rough numbers for performance comparison:
Server, Specs | Location | Build Ops | Build Time |
---|---|---|---|
Linux5, x4 i5 | tmp | -j 6 | 6:33 : 9:58:16 10:04:49 |
Linux5, x4 i5 | home SSD RAID 10 | -j 6 | 9:27 10:25 |
Linux5, x4 i5 | home SAS RAID 10 | -j 6 | 12:27 : 11:27:34 11:40:01 |
Linux5, x4 i5, 1SSD on server, sustains 14Mb/sec to file server | home | -j 6 | 11:45 : 7:48:57 8:00:42 |
Linux5, x4 i5, 1SSD on server | home | -j 6 | 13:10 : 7:32:33 7:45:42 |
Linux1, x8 AMD FX | tmp | -j 12 | 6:59 : 10:07:30 10:14:29 |
Linux1, x8 AMD FX | tmp | -j 6 | 7:01 : 10:46:06 10:53:07 |
Linux1, x8 AMD FX | home, 6 x SAS | -j 12 | 9:37 : 11:11:50 11:21:27 |
Linux2, x8 AMD FX | home RAID 10 SSD | -j 12 | 8:01 6:48 7:25 |
Linux1, x8 AMD FX, 20 Mb/sec to file server | home, 5 x SAS, 1 SSD | -j 12 | 9:36 : 8:05:37 8:15:13 |
Linux6, x2 AMD "550" | tmp | -j 3 | 16:57 : 9:31:40 9:48:37 |
Linux5, x4 i5 | home | -j 3 | 23:57 : 6:18:20 6:41:17 |
Linux5, x4 i5 | home | | 67:12 : 11:44:38 12:51:50 |
Linux1, x8 AMD FX | home, 6 x SAS | | 2:13:15 : 12:59:55 14:13:10 |
Raspberry PI A, 256M, 700MHz | SD card | | 16:15:00 : ~1pm 5:15:14am |
Using i3-8100 with 1x16G 2400MHz RAM, M.2 500G 860 EVO, ROG Strix H370-I Gaming MB:
Power Supply | BIOS power | Win 10 power |
---|---|---|
Antec ISK 110 P/S | 29.1W | 14.2W |
Antec MT 352 80Plus Bronze | 33.8W | 19.0W |
This is to collect the performance info to inform purchasing decisions. Power draw and run times are for the ECE459 nbody assignment with 5,000 * 64 points.
Model | Perf. | Max. Power | Idle / Compute Power | nbody min:sec | Cost |
---|---|---|---|---|---|
Quadro 600 GF108 (1G GDDR3, 128-bit bus, 96 CUDA cores) | ? TFLOPS | ?W | ? | ? | |
GTX 950 | 1.6 TFLOPS | 90W | ? | 5:00 | ? |
Tesla M2090 | 1.3 TFLOPS | 225W | 74W / 109W | 3:15 | $200 used Jan 2018 |
GTX 1060 | 3.9 TFLOPS | 120W | 25W / 50W | 2:18, 2:16 | $300+ |
Tesla P4 | 5 TFLOPS | 70W | 24W / 33W | 1:50 | est. $2,500 |
GTX 1070 | 5.8 TFLOPS | 150W | 20W / 75W | 1:35 | $300 used, $600+ |
RTX 2070 | - | 215W | ? / 90W | 1:21, 1:21 | $? |
GTX 1080 | 8.2 TFLOPS | 180W | ? | 1:08 | $750+ |
RTX 3070 | ? | ?W (runs at 82W) | ? | 61, 61, 60.9 sec | ~$800 |
RTX 2080 | - | 250W | 38 / 129W | 54.7, 58.5, 56.3, 57.0, 59.4 sec | $? |
Titan XP | 11 TFLOPS | 250W | ? | 0:45 | $1,200 |
This test runs the ECE 459 nbody test from Assignment 2 with 5,000 * 64 points. Run times on various GPUs are below.
GPU | run time | Misc. Specs |
---|---|---|
Nvidia Quadro 600 GF108 (runs at 76C) | 28:24 | Ubuntu 18.04, i3-6100 |
Nvidia GTX 950 | 5:06, 5:02, 5:01 | Ubuntu 16.04 or CentOS 7.4, Ryzen7 1700 |
Nvidia Tesla M2090 | 3:16, 3:13 | Ubuntu 16.04, i3-4170 |
3:14, 3:13 | Ubuntu 18.04, i3-6100 | |
Nvidia GTX 1060-6G | 2:16, 2:17, 2:17 | Ubuntu 18.04, NVidia driver 390.87 |
2:18, 2:18 | Ubuntu 18.04, NVidia driver 415.13 | |
Nvidia Tesla P4 | 1:57, 1:50, 1:55 | CentOS 7.4, Xeon Gold 5120 14-core |
Gigabyte GTX 1070 | 1:34, 1:35, 1:36 | Ubuntu 18.04 LTS, i7-7700K |
GeForce GTX 1070 FTW | 1:30, 1:31, 1:31 | Ubuntu 18.04 LTS |
PNY XLR8 GTX 1070 FTW | 1:27, 1:27, 1:27 | Ubuntu 18.04 LTS |
Nvidia GTX 1080 | 1:08, 1:09, 1:08 | Ubuntu 17.10 |
1:09, 1:08 | Ubuntu 18.04 | |
Nvidia Titan XP | 0:48, 0:48, 0:47 | Ubuntu 17.10, i7-8700K 6-core, DUAL GPU only 1 used |
0:47, 0:47, 0:44 | Ubuntu 16.04, i7-8700K 6-core, DUAL GPU only 1 used | |
RTX 3070 | 61.4, 61.1 sec | Ubuntu 20.04 LTS - Feb 2022 |
RTX 3090 | 39.6, 39.9, 37.2 sec | Ubuntu 18.04 LTS - Nov 2021 |
This test runs the Caffe mnist example. Run times on various GPUs are below.
GPU | run time | Misc. Specs |
---|---|---|
Titan Xp | 1:19 | Ubuntu 16.04, Dual GPU in machine, only one used |
1:42, 1:43, 1:43 | Ubuntu 17.10, Dual GPU in machine, only one used | |
GTX 1080 | 1:10, 1:09 | Ubuntu 18.04 |
1:37, 1:39 | Ubuntu 17.10 | |
GTX 1060-6G | 1:14, 1:16 | Ubuntu 18.04 |
GTX 950 | 1:54 | Ubuntu 16.04 |
2:33 | Ubuntu 17.10 | |
1:51, 1:51 | Ubuntu 18.04 |
Below are the runtimes for the A-3.py file from Chris Warren's Master's thesis (python3 A-3.py). The OS is Ubuntu 18.04 unless otherwise stated.
CPU | SpecMark (single / multi) | Part 1 Runtime (sec) | Part 2 Runtime (sec) |
---|---|---|---|
i7-7700k : June 2019 | 2585 / 12055 | 109 | 6.0 |
i7-7700k | 109 | 6.3 | |
i7-8700 CentOS 7 : June 2019 | 2705 / 15995 | 110 | 5.2 |
Ryzen 7-1700 : June 2019 | 1775 / 13750 | 113 | 9.4 |
Ryzen 7-1700 | 130 | - | |
i7-6700 | 2156 / 10011 | 125 | 6.7 |
i7-6700 : June 2019 | 124 | 6.8 | |
i5-6500 | 1945 / 7235 | 123 | 7.2 |
i5-6500 : June 2019 | 126 | 7.5 | |
i7-3770 | 2068 / 9284 | 153 | 13.4 |
i7-3770 : June 2019 | 154 | 13.1 | |
Xeon Gold 5120 : June 2019 | 1725 / 18145 | 168 | 7.9 |
Xeon Gold 5120 | 163 | 8.0 |
For this test the queens problem makes use of all CPU threads. I did not test deactivating hyperthreading.
For Ubuntu 17.10 the program was recompiled, which may explain the faster runtimes.
CPU | run time | Specs | Passmark Test (single / all threads) |
---|---|---|---|
DUAL AMD EPYC 7H12 | 0:37, 0:37, 0:36 (3-2021) | 2 x 64-cores, 2.6GHz, Boost 3.3GHz, Ubuntu 20.04 LTS LOAD IS ONLY 12 !! | EPYC 7763 2.5GHz/3.5GHz 64-core is 2639 / 87,686 |
AMD Threadripper 3960X 24-core | 0:23.5, 0:23.6, 0:23.5 | 24-cores, not all cores loaded | - / - |
AMD Ryzen 9 5900X | 0:28, 0:27 | 12-cores, 3.7GHz, Boost 4.8GHz, Ubuntu 20.04 LTS | 3502 / 39498 |
AMD Ryzen 7 5700G | 0:35, 0:36 | 8-cores, 3.8GHz, Boost 4.6GHz, Ubuntu 22.04 LTS | - / - |
AMD EPYC 7302P | ? | 16-cores, 3.3GHz, Ubuntu 18.04 LTS | 2248 / 30994 |
AMD Ryzen 2700X | 0:43, 0:42, 0:43 | 8-cores, 3.7GHz, Ubuntu 18.04 LTS | 2193 / 16985 |
Xeon Gold 5120 | 0:42, 0:43, 0:42 (0:61 3-2021) | 14-cores, 2.2GHz, CPU ~$3000, Ubuntu 18.04 LTS | 1725 / 18145 |
Threadripper 1920X | 0:55, 0:57, 0:56 | 3.5GHz, 12-cores, 195W, CPU ~$700, Ubuntu 16.04 LTS | 1978 / 18285 |
Ryzen7-1700 | 0:52, 0:52 | 8-cores, 3.0GHz, 4 of 2400MHz RAM, Ubuntu 18.04 with 4.15.0-42 kernel, Dec 16, 2018 | 1775 / 13750 |
0:51, 0:51, 0:55 | 8-cores, 3.0GHz, 4 of 2133MHz RAM, Ubuntu 18.04 with 4.15.0-20 kernel, May 2, 2018 | ||
0:58, 0:57, 0:57 (118W power draw) | 8-cores, 3.0GHz, 4 of 2133MHz RAM, Ubuntu 17.10 with 4.13.0-37 kernel | ||
i7-8700K | 0:60, 0:55, 0:55 | 6-cores, 3.7GHz, Ubuntu 17.10, 8G DDR4-2666 * 2, 2 of Titan GX GPUs | 2705 / 15995 |
i7-8700 | 0:52, 0:51, 0:52 | 6-cores, 3.2 to 4.6GHz, Ubuntu 18.04, 8G * 2, 109W running | 2779 / 15154 |
2 x Xeon X5680 | 1:00, 1:01, 0:58 | 6-cores, 3.33 to 3.6GHz, RHEL 8 | 1521 / 2 x 6795 |
2 x Xeon E5-2667 v2 | 1:05, | 8-cores, 3.4 to 4.0GHz, Sci Linux 7.6 | 2035 / 2 x 21339 |
i5-8400 | 0:63, 0:63, 0:64 | 6-cores, 1x16G RAM, 2.8 to 4.0GHz, Ubuntu 18.04 LTS, 4.15.0 kernel | 2335 / 11745 |
Dell R610 - 2 x Xeon X5680 | 0:63, 0:62, 0:63 | 2 x 6-cores, 96G RAM, 3.33 to 3.6GHz, RedHat Enterprise 8.6 , 4.18.0 kernel | 1515 / 6804 (each CPU) |
AMD EPYC 3201 | 1:48, 1:51 (3-2021) | 8-core, 1.5GHz, Boost 3.1GHz, Ubuntu 20.04 LTS | 2027 / 12677 |
i7-7700K | 1:07, 1:07 | 4-cores, 4GHz, Ubuntu 18.04 LTS Dec 2018 | 2585 / 12055 |
1:13, 1:16, 1:08, 1:08 | |||
i7-6700 | 1:17, 1:17, 1:18, 1:18 | 4-cores, 3.4GHz, Ubuntu 18.04 LTS June 2018 | 2156 / 10011 |
1:20, 1:19 | Dec 2018 | ||
i3-8100 | 1:35, 1:35 | 4-cores, 1x16G RAM, 3.6GHz, Ubuntu 18.04 LTS, 4.15.0 kernel | 2105 / 8090 |
1:33, 1:36, 1:34 | 2x16G RAM | ||
i7-3770 | 1:34, 1:34 | 4-cores, 3.4GHz, Ubuntu 18.04.02 | 2068 / 9284 |
i5-4590 | 1:52, 1:35, 1:35 | 4-cores, 3.7GHz, Ubuntu 18.04.5 | 2091 / 5317 |
i5-4590 | 1:40, 1:42 | 4-cores, 3.7GHz, RHEL 9 | 2091 / 5317 |
Xeon Gold 5120 | 1:35, 1:38 | 14-cores, 2.2GHz, CPU ~$3000, CentOS 7.4 | 1725 / 18145 |
i5-6500 | 1:37, 1:37, 1:36, 1:35 | 4-cores, 4x16G RAM, Ubuntu 18.04 LTS, 4.15.0 kernel | 1945 / 7235 |
Ryzen3-2200G APU | 1:46, 1:47, 1:49, 1:49, 1:47 | 4-core, 4-thread, 3.5GHz, Ubuntu 18.04 LTS 4.13.0-45 kernel, 33.6W in Ubuntu idle, 83.2W running nqueens_omp | 1820 / 7355 |
nuc7i7 i7-7567U | 2:25, 2:25, 2:24 | 2-cores, 2x8G RAM, Ubuntu 18.04 LTS | 2264 / 6497 |
i3-6100 | 3:11, 2:35, 2:39, 2:38 | 2-cores, 4x16G RAM, Ubuntu 18.04 LTS, 4.15.0 kernel | 2110 / 5495 |
AMD Phenom II x6 1090T | 2:41, 2:39, 2:42 | 6-cores, 3.2GHz, Ubuntu 18.04, July 2018 | 1220 / 5595 |
i3-4130 | 2:50, 2:53 | 2-cores, 3.4GHz, Ubuntu 18.04 LTS, 4.15.0-23 kernel, July 2018 | 1963 / 4793 |
i7-8700K | 3:18, 3:32 | 6-cores, 3.7GHz, Ubuntu 16.04, 4.10 kernel, 8G DDR4-2666 * 2, 2 of Titan GX GPUs, 80.5W idle, 156W running | 2705 / 15995 |
i7-8700 | 3:24, 3:20 | 6-cores, 3.2 to 4.6GHz, CentOS 7.4, 8G * 2, 68.5W running | 2779 / 15154 |
i3-3220 | 3:31, 3:28, 3:30 | 2-cores, 3.3GHz, Ubuntu 18.04 LTS, 4.15.0-23 kernel, July 2018 | 1760 / 4233 |
i7-7700K | 3:34, 3:41 | 4-cores, 4GHz, CentOS 7.4, Jan 2018 | 2583 / 12055 |
3:43, 3:31, 3:47 | 4-cores, 4GHz, CentOS 7.5, July 2018 | ||
AMD FX-8350 | 3:31, 3:26, 3:20 | 8-cores, 4.0GHz, Ubuntu 18.04, July 2018 | 1510 / 8950 |
Ryzen7-1700, 2 of 2666MHz RAM | 3:45, 3:51 (89.7W power draw) | 8-cores, 3.0GHz, CentOS 7.4 with 3.10.0-693 kernel | 1775 / 13750 |
Ryzen7-1700, 4 of 2133MHz RAM | 4:00 (only 83W power draw) | 8-cores, 3.0GHz, CentOS 7.4 with 3.10.0-693 kernel | |
Ryzen7-1700, 2 of 2666MHz RAM | 4:01, 4:01 (88.5 power draw) | 8-cores, 3.0GHz, CentOS 7.4 with 4.15.2 kernel | |
Ryzen7-1700, 4 of 2133MHz RAM | 4:03, 4:03 (only 83W power draw but 125W running CPU burn-in) | 8-cores, 3.0GHz, CentOS 7.4 with 4.15.2 kernel | |
Ryzen7-1700, 4 of 2133MHz RAM | 4:16, 4:09 (only 83W power draw but 105W in BIOS) | 8-cores, 3.0GHz, Ubuntu 16.04 LTS, 4.10 kernel | |
Ryzen7-1700, 2 of 2666MHz RAM | 4:21, 4:16, 4:09, 4:16, 4:06 (90.5W power draw) | 8-cores, 16-thread, 3.0GHz, Ubuntu 16.04 LTS, 4.10 kernel | |
Xeon E5410 16G RAM | 4:50, 4:47, 4:51 | 4-cores, 8-thread, 2.33GHz, Ubuntu 17.10 | 1000 / 3268 |
Ryzen7-1700 no threading, 4 of 2133MHz RAM | 5:59, 6:11 (power draw ~85W) | 8-cores, 3.0GHz, Ubuntu 16.04 LTS, 4.10 kernel | |
i7-6700 | 4:30, 4:30, 4:07 | 4-cores, 3.4GHz, CentOS 7.5, July 2018 | |
i7-3770 | 4:21, 4:08 | 4-cores, 3.4GHz, CentOS 7.4 | 2068 / 9284 |
Ryzen3-2200G APU | 5:04, 4:37 (nqueens_omp 15: 53, 42 sec) | 4-core, 4-thread, 3.5GHz, Ubuntu 16.04 LTS, 4.10 kernel, ~74W power draw | |
i5-8400 | 5:13, 5:01, 6:09, 4:59 | 6-cores, 2.8 to 4.0GHz, CentOS 7.5, 3.10 kernel | |
5:12, 5:34 | 6-cores, 2.8 to 4.0GHz, CentOS 7.5, 4.17 ML kernel | ||
Ryzen3-2200G APU | 5:23, 5:55 | 4-core, 4-thread, 3.5GHz, CentOS 7.4 3.10.0 kernel, ~70W power draw | |
i5-4590 | 6:59, 6:59, 7:00 | 4-cores, 3.7GHz, CentOS 7.4 | 2091 / 5317 |
i5-3450 | 6:58, 6:46 | 4-cores, 4x16G RAM, 3.1 to 3.5GHz, CentOS 7.5 | 1856 / 6520 |
i3-6100 | 7:16, 7:09, 7:34 | 2-cores, 4x16G RAM, CentOS 7.5, 3.10.0-862 kernel | |
i3-8100 | 7:17, 7:17, 7:07 | 4-cores, 3.6GHz, CentOS 7.5, 3.10 kernel | |
i3-4170 | 7:22, 7:38 | 2-cores, 3.7GHz, Ubuntu 16.04 LTS, 4.10 kernel | |
7:27, 7:34 | compiled on Ubuntu 16.10, 2-cores, 3.7GHz, Ubuntu 16.04 LTS, 4.10 kernel | ||
7:32, 7:33 | 2-cores, 3.7GHz, CentOS 7.5 3.10 kernel | ||
i5-6500 | 8:17, 8:33 Other SW running | 4-cores, 4x16G RAM, CentOS 7.5, 3.10.0-862 kernel | |
i3-4170 Hyper-Threading disabled | 11:04, 10:31 | 2-cores, 3.7GHz, Ubuntu 16.04 LTS, 4.10 kernel |
An identical system had its CPU switched from i7-6770 to i7-7700K. The 7700K had a 25% higher clock, but that was not exploited until the CentOS 7 kernel was updated to 4.10.10 from the stock 3.10. For performance testing, a simple Quartus 15.0 project was compiled (1, 2, 4 at a time).
Summary:
Server specifications:
Server, Specs | Networking | File Source | Num. Compiles | Build Time | |
---|---|---|---|---|---|
eceLinux2 i7-6770 kernel 3.10 | 1Gb/s | eceServ1 | 1 | 80, 79, 79 | Reference |
2 | 1:26 | ||||
4 | 1:56, 1:45 | ||||
eceLinux2 i7-6770 kernel 4.10.10 | 1Gb/s | eceServ1 | 1 | 49, 44 | 44% faster |
2 | 48, 46, 46 | ||||
4 | 58, 59 | ||||
eceLinux2 i7-6770 kernel 4.10.10 | 1Gb/s or 10Gb/s | local SSD | 1 | 37, 37, 37, 37, 37 | 50% faster |
2 | 37, 38, 37, 39, 39 | ||||
4 | 49, 51, 53, 50, 51 | ||||
eceLinux1 i7-6770 kernel 4.10.1 | 1Gb/s | eceServ1 | 1 | 47, 45, 45 | 43% faster |
2 | 46, 45 | ||||
4 | 58, 63, 58 | ||||
8 | 66, 55 | ||||
eceLinux1 i7-6770 kernel 4.10.1 | 10Gb/s | eceServ1NEW | 1 | 50, 45, 48 | ?% faster |
2 | 47, 47, 46 | ||||
4 | 58, 62, 61 | ||||
eceLinux1 i7-7700K kernel 4.10.1 | 1Gb/s | eceServ1 | 1 | 60, 52, 49, 50 | SLOWER |
2 | 51, 61, 51 | ||||
4 | 70, 68 | ||||
eceLinux1 i7-7700K kernel 4.10.10 | 10Gb/s | eceServ2 | 1 | 45, 44 | 44% faster |
2 | 49, 48 | ||||
4 | 56, 55 | ||||
eceLinux1 i7-7700K kernel 4.10.10 | 10Gb/s | eceKVMserv | 1 | 40, 40, 39 | 50% faster |
2 | 40, 40, 39, 40 | ||||
4 | 45, 46, 46, 45 | ||||
8 | 1:21 | ||||
eceLinux1 i7-7700K kernel 4.10.10 | 10Gb/s | local SSD | 1 | 33, 33, 33 | 58% faster |
2 | 33, 33, 33 | ||||
4 | 42, 43 | ||||
eceLinux3 i7-3770 kernel 3.10.0 | 1Gb/s | eceServ1 | 1 | 58, 53, 55, 1:01, 55, 56 | ? |
2 | 56, 55, 55, 1:08, 57, 1:06 | ||||
4 | 1:08, 1:07, 1:07, 1:10, 1:09, 1:06 | ||||
eceLinux3 i7-3770 kernel 4.10.10 | 1Gb/s | eceServ1 | 1 | 1:51, 51 | Unpredictable |
2 | 2:51, 50, 2:50, 1:50, 51 | ||||
eceLinux3 i7-3770 kernel 3.10.0 | 1Gb/s | local SSD | 1 | 46, 46 | ? |
2 | 51, 47, 48 | ||||
4 | 1:01, 1:02, 1:02 | ||||
eceLinux9 VM 1-core i5-6500 kernel 3.10.0, 8G RAM | 1Gb/s | eceServ1 | 1 | 1:06, 1:13, 1:07 | ? |
2 | 1:51, 1:53 | ||||
eceLinux9 VM 1-core i5-6500 kernel 4.10.10, 8G RAM | 1Gb/s | eceServ1 | 1 | 1:14, 1:01, 1:00, 59 | ? |
2 | 1:45 | ||||
eceLinux9 VM 2-core i5-6500 kernel 4.10.10, 8G RAM | 1Gb/s | eceServ1 | 1 | 59, 59, 58 | ? |
eceLinux9 VM 1-core i5-6500 kernel 4.10.10, 8G RAM | 10Gb/s | eceServ1NEW | 1 | 1:11, 1:02 | ? |
2 | 1:51, 1:52, 1:50 | ||||
eceLinux5 i5-8400 July 2018 | 1Gb/s | eceServ1 | 1 | 1:04, 0:53 | |
2 | 1:01, 1:03 | ||||
4 | ? |
FIO tests to various file servers
```
fio --rw=randread --bs=64k --numjobs=4 --iodepth=8 --runtime=30 --time_based --loops=1 \
    --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 \
    --norandommap --exitall --name task1 --filename=/home-fast/1.txt --size=10000000

eceKVMserv:  READ: io=3732.8MB, aggrb=127349KB/s, minb=31801KB/s, maxb=31897KB/s, mint=30009msec, maxt=30014msec
eceServ2:    READ: io=33608MB, aggrb=1120.2MB/s, minb=286662KB/s, maxb=286981KB/s, mint=30001msec, maxt=30002msec
eceServ1:    READ: io=3325.8MB, aggrb=113444KB/s, minb=28348KB/s, maxb=28387KB/s, mint=30008msec, maxt=30019msec
eceServ1NEW: READ: io=25806MB, aggrb=880783KB/s, minb=219712KB/s, maxb=220599KB/s, mint=30001msec, maxt=30002msec
```
Summarized:
```
eceKVMserv:  READ: io=3.7GB, aggrb=127MB/s, minb=32MB/s,  maxb=32MB/s
eceServ1:    READ: io=3.3GB, aggrb=113MB/s, minb=28MB/s,  maxb=28MB/s   [1Gb/s]
eceServ2:    READ: io=34GB,  aggrb=1.1GB/s, minb=287MB/s, maxb=287MB/s
eceServ1NEW: READ: io=26GB,  aggrb=881MB/s, minb=220MB/s, maxb=221MB/s  [10Gb/s]
```
Windows XP, Antec 80Plus power supply, Asus M5A88-M AM3+ motherboard, 32G DDR3, SATA HD & DVD-ROM. The Quartus compile is a DE2 demo circuit and a simple ECE 124 stop-light circuit.
The Linux Power Draw column includes the computer specs above plus an Adaptec 6405e RAID card, with an SSD in addition to the SATA HD above.
Student lab computers use the 560 CPU; many of the 270's are available.
AMD CPU Model | Quartus Compile Demo / Stoplight | BIOS Power | Windows XP Power Draw | Computers With This CPU | Linux Power Draw |
---|---|---|---|---|---|
Phenom II 560 2-core 3.3GHz | 18 / 29, 27, 28, 25 sec | 83W | 48.5W | eceLinux6,7,8,9,10, Mail, Arbeau, ieee | 73W |
445 3-core | 25 sec / crashes | 68W? | 46.0W | eceLinux12 | 68W |
FX4300 4-core, 3.8GHz | ? / 26, 33, 26 sec | 62W? | 44.5W | 62W | |
270 2-core 3.4GHz | ? / 26, 27, 26 sec | 80.0W | 44.5W | 65.4W | |
250 2-core 3.0GHz | ? / 43, 44, 36, 41 sec | ? W | 44.1W | ||
Phenom II 550 2-core | 21 sec / ? | ? | ? | ||
1090T 6-core | ? | ? | ? | wo32 | |
1055T 6-core | ? | ? | ? | kvm | |
FX 8350 8-core | ? | ? | ? | eceLinux1,2,3 |
It was noticed that the Asus H97M-E motherboards, with SSDs, were significantly slower than 4-year-old Asus P8H77-M motherboards, and this holds for Win 7 and Win 10 even after trying BIOS and driver tweaks / updates. Windows performance tests of the SSD, RAM and CPU showed good performance.
The performance tests with Linux were done using CentOS 6, Quartus 10.1 targeting Cyclone II (auto) from /praetzel/QuartusTest101/lab1
Linux Quartus 13? targeting Cyclone II and IV, project on NFS share 1Gb/s networking | ||||
---|---|---|---|---|
Motherboard | OS | CPU | Quartus Compile Times Seconds | |
P8H61 | CentOS 6 | i3-2120 3.2?GHz | 12, 12, 12 Cyclone IV GX 20, 22, 21, 20 | |
H97M | CentOS 6 | i3-4170 3.7GHz | 11, 12, 11 Cyclone IV GX 19, 17, 19, 18 | |
H170M | CentOS 6 | i3-6700 3.7GHz | 11, 12, 12 Cyclone IV GX 19, 19, 18, 18 | |
P8H77-M ece-public20 | Win 7, SSD | i3-3220 | 18, 18, 19 SSD: 12, 12, 12 1Gb/s: 14, 14, 14 ieee CIFS: 15, 15, 15 | |
Windows Quartus 15.1 targeting Max 10, project on N: cifs share, 100Mb/s networking | ||||
H97M ece-mcu20 | Win 7 | i3-4170 | 3:28, 2:00, 1:56 | |
H97M ece-mcu20 | Win 10 | i3-4170 | 2:23, 1:53, 2:19 SSD: 2:01, 2:00, 2:04 | |
H81M-C | Win 10 | i3-4150 | 3:14, 3:06, 3:07 SSD: 2:50, 2:54 | |
P8H77-M ece-fpga29 | Win 10 | i3-3220 | 1:36, 1:37, 1:34 | |
P8H77-M ece-rtos7 | Win 7 | i3-3220 | 1:47, 1:29, 1:29 | |
P8H77-M ece-cpuio29 | Win 7 1Gb/s eth | i3-3220 | 1:01, 0:53, 1:00 | |
P8H77-M ece-fpga29 | Win 10 - April 21, 2017 Fresh install | i3-3220 | N: 1:57, 1:58 C: 1:57, 1:43 | |
H97M ece-mcu20 | Win 10 - April 21, 2017 Fresh install | i3-4170 | n: 3:09, 2:57 C: 2:37, 2:38 | |
May 17, 2017 Network upgrade to 1Gb/s | ||||
Motherboard | OS | CPU | Quartus Compile Times Seconds | |
H97M ece-mcu17 | Win 7 | i3-4170 1Gb/s | N: 2:27, 2:18 | |
SSD: 1:55, 1:51, 2:06 | ||||
H97M ece-mcu18 | Win 7 | i3-4170 1Gb/s | N: 1:10, 2:54, 2:51, 2:42 | |
SSD: 2:31, 2:44, 2:44 | ||||
H97M ece-mcu19 | Win 7 | i3-4170 1Gb/s | N: 2:12, 1:35, 1:44 | |
HDD: 1:33, 1:26, 1:33 |
This was a simple test to evaluate if I should buy 1 or 2 sticks of RAM in the P8H77-M Asus M/B using DDR3-1333 4G sticks and the Windows Performance test:
1 of 4G RAM: 5.9
2 of 4G RAM: 7.5
4 of 4G RAM: 7.5
Feb 2014 - Upgrading from Asus M2Nmx-SE Plus and Win 7 - either use a new image and re-install, or get all but two drivers (PCI, Unknown) from the install USB key, then run the setup.exe program in the Video and MEI directories under drivers.
Upgrading from Asus M2Nmx-SE Plus and Win 7 - simply replace the motherboard, go from DHCP to fixed IP until DHCP has a new MAC set
Windows Performance Scores:

Computer Name | CPU | RAM | Graphics | Game Graphics | Hard Drive | Motherboard | Other Specs |
---|---|---|---|---|---|---|---|
Asus M2N PV VM Motherboard, AMD AM 2 CPU | |||||||
Control6 | 4.4 | 5.7 | 3.5 | 3.0 | 5.7 | ||
Control7 | 4.4 | 5.7 | 3.4 | 3.0 | 5.7 | ||
Control9 | 4.4 | 5.7 | 2.9 | 3.0 | 5.7 | ||
Asus M2N SE Plus Motherboard, AMD AM 2+ CPU, DDR 2 | |||||||
lab10 | 4.4 | 5.1 | 3.3 | 3.2 | 5.9 | M2N SE Plus | 160G HD, 4G DDR2, LE 1640 2.6GHz x1 |
4thYear0 | 4.9 | 4.9 | 3.7 | 3.3/3.2 | 5.9 | M2N SE Plus | 160G HD, 4G DDR2 800MHz, AMD 6100 3.1GHz x2 |
motor2 | 6.5 | 4.9 | 3.7 | 3.3 | 5.9 | M2N SE Plus | 160G HD, 4G DDR2 |
motor8 | 6.5 | 4.9 | 3.7 | 3.2 | 5.9 | M2N SE Plus | 160G HD, 4G DDR2 |
motor22 | 4.9 | 5.5 | 3.2 | 3.5 | 5.8 | M2N SE Plus | 160G HD, 4G DDR2, "4400" CPU x2 2.2GHz |
motor22 | 6.7 | 5.9 | 3.5 | 3.2 | 5.8 | M2N SE Plus | 160G HD, 4G DDR2, CPU "270" x2 3.4GHz |
motor13 | 6.5 | 5.9 | 3.6 | 3.2 | 5.8 | M2N SE Plus | 160G HD, 4G DDR2, CPU "250"? 3.1GHz x2 |
motor17 | 6.5 | 5.9 | 3.7 | 3.3 | 5.8 | M2N SE Plus | 160G HD, 4G DDR2 |
Asus M3A78 Motherboard, AMD AM 3 CPU, DDR 2 | |||||||
circuits33 | 6.7 | 5.9 | 3.2 | 5.1 | 5.8 | M3A78-CM | DDR2 4G, 160G HD, AMD "270" 3.4GHz x2 |
Asus M4A785T Motherboard, AMD AM 3 CPU, DDR 2 | |||||||
cpuio7 | 6.7 | 5.9 | 4.4 | 5.4 | 5.9 | M4A785T-M | DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2 |
cpuio8 | 6.7 | 5.9 | 4.1 | 5.3 | 5.9 | M4A785T-M | DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2 |
cpuio1 | 6.7 | 5.9 | 4.1 | 5.2 | 5.9 | M4A785T-M | DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2 |
cpuio3 | 6.7 | 5.9 | 4.4 | 5.3 | 5.9 | M4A785T-M | DDR3 1033 or 1333 4G, 160G HD, AMD 2.7 or 3.1 or 3.4GHz x2 |
Asus M5A88-M Motherboard, AMD AM 3+ CPU, DDR 3 | |||||||
public01 | 6.6 | 7.4 | 4.3 | 5.4 | 5.9 | M5A | AMD Black x2 "560" 3.3GHz |
public02 | 6.6 | 7.4 | 4.3 | 5.4 | 5.9 | M5A | AMD x2 "560" Black 3.2GHz |
public06 | 6.6 | 7.4 | 4.4 | 5.5 | 5.9 | M5A | AMD x2 "560" Black 3.2GHz |
Asus P8H77-M Motherboard, Intel i3 CPU, DDR 3 | |||||||
rtos1 | 7.2 | 7.5 | 1.0 | 1.0 | 5.9 | P8H77-M | DDR3 4G 1333 or 1600MHz, 160G or 500G WD HD |
rtos2 | 7.2 | 7.5 | 1.0 | 1.0 | 5.9 | P8H77-M | DDR3 4G 1333 or 1600MHz, 160G or 500G WD HD |
rtos3 | 7.1 | 7.5 | 1.0 | 1.0 | 5.9 | P8H77-M | DDR3 4G 1333 or 1600MHz, 160G or 500G WD HD |
ecestaf79 | 7.1 | 7.5 | 5.3 | 5.8 | 5.9 | P8H77-M | DDR3 8G 1333, 500G WD HD |
Asus H81M-C Motherboard, Intel i3-4130 CPU, 2 x 4G DDR 3 | |||||||
testing | 7.3 | 7.6 | 6.6 | 6.6 | 5.9 | H81M-C | DDR3 2 x 4G 1333, 160G WD HD |
Altera Quartus is our heaviest software and is used as a performance indicator.
For each test case several runs were performed. The first is often garbage.
On Centos 5.11 using the ECE 327 tools - compiling multiple ECE 327 Heating System circuits.
Winter 2015 the file server was upgraded to RAID 10 with SSD in parallel with 10k rpm drives; the April 2014 tests used 15k rpm SAS RAID 10.
# Compiles | FX8350 8-core | i7-6700 | i7-3770 Apr 2014 | i7-3770 Weird! | Celeron G1620 May 2014 | i3-2120 May 2014 | i3-4130 May 2014 | Celeron G3420 May 2014 |
---|---|---|---|---|---|---|---|---|
1 | 0:29 | 0:19 | 0:23 | 0:50 | 0:31 | 0:27 | 0:24 | 0:25 | |
2 | 0:30 | 0:18 | 0:22 | 0:57 | 0:34 | 0:28 | 0:25 | 0:28 | |
4 | 0:33 | 0:19 | 0:22 | 1:14 | 0:58 | 0:39 | 0:35 | 0:42 | |
8 | 0:43 | 0:32 | 0:30 | 1:25 | 1:45 | 1:10 | 0:59 | 1:22 | |
16 | 1:21 | 1:03 | - | 1:02!! | - | - | - | - |
On CentOS 5.6 with Quartus 10.1, comparing the AMD 6-core 3.2GHz 1090T to the 8-core 4.0GHz FX 8350. Enabling the overclocking utility on the M5A88-M motherboard made no performance difference.
# compiles | 8-core seconds | 6-core 1090T seconds |
---|---|---|
2 | 46 | 56 |
4 | 55 | 60 |
8 | 70 | 73 |
16 | 120 | 118 |
This is a test compiling the ECE 423 Cyclone V project on Quartus 15.1.
The i3-4170 on the Asus H97M-C is slow for Win 7 and Win 10 only - excellent Linux performance - Driver Issue??
OS | #Cores, base clock, boost clock | CPU Cost | Setup | Synthesis Time Hour:Min:Sec | Passmark Single Threaded Perf |
---|---|---|---|---|---|
Win 10 | 6, 3.2GHz, 4.6GHz | $?? | i7-8700, AORUS Z370 Ultra Gaming MB, on 500G HDD | 19:06, 18:43, 18:45 min:sec | 15,240 / 2540 |
using local 250G SSD | 17:48, 17:37, 17:53 | ||||
using samba share to ieee | 20:08, 20:15, 20:06 | ||||
Win 10 1803 | 4, 3.2GHz, 4.6GHz | - | i7-8700, H310I-PLUS MB, on 500G Samsung 970 SSD | 18:20, 18:06, 23:09 | 15154 / 2779 |
on N: drive | 19:54, 23:14, 19:49, 20:38 | ||||
Win 10 | 4, 4.2GHz, 4.5GHz | $470 | i7-7700K, H170M MB, on 250G SSD | est. 19 min | 12,130 / 2580 |
Windows 10 1709 | 6, 2.8 to 4.0GHz | $225 | i5-8400, 2x8G, Asus H310Mi-Plus | 20:02, 20:01 | |
i5-8400, 2x16G, Asus H310Mi-Plus | 19:32, 19:28, 19:37 | ||||
Windows 10 1709 | 4, 3.6GHz | $167 | i3-8100, 1x16G, Asus ROG Strix H370-I | 22:16, 22:14 (Are these correct? 20:21, 20:23, 21:18 from the Asus H310Mi-Plus MB?) | |
i3-8100, 2x16G, Asus ROG Strix H370-I | 22:10, 21:45, 21:48, 21:44 | ||||
Win 10 | 4, 3.4GHz, 4.0GHz | $400 | i7-6700, H170M MB, on 250G SSD | 24:09, 22:33, 21:15, 21:09, 21:10, 21:08 | 10,010 / 2150 |
Win 10 | 4, 3.2GHz, 3.6GHz | $270 | i5-6500, H170M MB, on 250G SSD | 25:02, 22:57, 22:36, 22:28, 22:31 | 7230 / 1950 |
Win 10 + Avast | 22:56, 22:36, 22:35 | ||||
Win 10 | 4, 3.2GHz, 3.6GHz | $270 | i5-6500, H170M MB, on 500G HD | 23:24, 22:56, 22:37 | 7225 / 1950 |
Win 10 | 2, 3.5GHz, 4.0GHz | $NA | Intel NUC i7-7567U, on NVMe 512G M.2, 32G RAM (MME) | 25:40, 24:40, 24:39 | 6542 / 2267 |
Win 10 | 4, 2.6GHz, 3.5GHz | $NA | Intel NUC i7-6770HQ, on NVMe 256G M.2 | 26:00, 25:51, 25:58 | 9690 / 1903 |
Win 10 | 2, 3.5GHz, 4.0GHz | $NA | Intel NUC i7-7567U, on NVMe 256G M.2 (ECE) | 27:45, 26:58, 27:28, 27:23, 26:08 | 6520 / 2260 |
Win 10 | 2, 3.8GHz, - | $232 | i3-6300, H170M MB, 2 x 16G DDR4, on 250G SSD | 26:39, 26:58, 26:35, 26:46, 26:11 | 5850 / 2165 |
Win 10 | 2, 3.8GHz, - | $232 | i3-6300, H170M MB, 1 x 16G DDR4, on 250G SSD Win setting "performance" no graphical effects | 28:55, 26:29, 26:18 | 5850 / 2165 |
Win 10 1709 | 4, 3.5GHz, 3.7GHz | $150 | Ryzen-3 2200G, 16G 2666MHz, MSI B350M PRO-VDH MB, on 250G SSD | 27:25, 25:11, 25:09 | ? / ? |
Win 10 | 8, 3.4GHz, 3.8GHz | $480 | Ryzen-7 1700X, Prime X370-Pro MB, on 250G SSD | 27:55, 27:05 | 14,640 / 1865 |
Win 10 | 8, 3.0GHz, 3.7GHz | $415 | Ryzen-7 1700, Prime X370-Pro MB, on 250G SSD | 28:37, 28:44, 28:35, 28:37 | 13,790 / 1765 |
Win 10 | 8, 3.0GHz, 3.7GHz | $415 | Ryzen-7 1700, 64G 2133MHz, MSI B350M PRO-VDH MB, on 250G SSD | 26:18, 25:35, 25:32 | 13,790 / 1765 |
Win 10 | 12, 3.5GHz, 4.0GHz | $1,200 | Threadripper 1920X, X399 AORUS Gaming 7 MB, on 500G HD | 27:59, 27:23, 27:36 | 19455 / 2029 |
Win 10 | 12, 3.5GHz, 4.0GHz | $1,200 | Threadripper 1920X, X399 AORUS Gaming 7 MB, on 250G SSD | 28:02, 27:17 | |
Win 10 | 2, 3.7GHz, - | $155 | i3-6100, H170M MB, on 250G SSD running lots of other SW | 32:07, 29:42, 30:34 | 5485 / 2105 |
running little or no other software | 29:01, 27:51 | ||||
using 500G 7200rpm HDD | 28:50, 29:07, 29:13 | ||||
Win 10 | 2, 3.3GHz, - | $NA | i3-3220, P8H77-M MB, on 500G HDD | 59:39, 1:00:24 | 4225 / 1760 |
Win 10 | 2, 3.7GHz, - | $NA | i3-4170, H97M-C MB, on 500G HDD | 1:30:13, 1:31:31, 1:31:04 | 5180 / 2130 |
Win 10 | 2, 1.6GHz, 2.7GHz | $NA | Intel NUC i5-5250U, on NVMe 256G M.2, 16G DDR3L | 1:45:15 | 3630 / 1450 |
The circuit is trivial - the stoplight from ECE 124 targeting the Max 10 used on the LogicalStep board, with Quartus 15.1 on CentOS 7 and Windows 10.
Whether or not the Windows 10 machine was on the Nexus domain did not matter; in the past that doubled synthesis times.
OS | Setup | Network | Synthesis Time seconds |
---|---|---|---|
Windows 10 21H2 (June 2023) NUC i7-1260P 2.1 to 4.7GHz, 2x8G, boot SSD | Local SSD | 1Gb/s | 35, 25, 25 |
Windows 10 1709 i5-8400 2.8 to 4.0GHz, 2x8G H310MI-Plus, boot SSD | Local SSD | 1Gb/s | 43, 40, 40 |
Windows 10 1709 i3-8100 3.6GHz, 1x16G ROG Strix H370-I, boot SSD | Local SSD | 1Gb/s | 45, 43, 43 |
Windows 10 i3-6100 3.7GHz, 16G H170MB, boot SSD | 1Gb/s | 43, 42 | |
Samba to ieee server | 61, 54, 50, 50, 50 | ||
NUC i5-5250U on nvme SSD, 16G DDR3L | 3:13, 3:08, 3:08 | ||
Samba to ieee | 3:22, 3:26, 3:23 | ||
using Nexus N: drive | 4:13, 3:52, 3:42 | ||
i5-6500 on SSD | 44, 41, 41, 41 | ||
Samba to ieee | 49, 50, 50, 49, 49 | ||
on Nexus using N: drive | 50, 46, 46, 45 | ||
H170, i3-6100 office Nexus machine | N: drive | 1Gb/s | 50, 51, 50 |
Samba to ieee | 57, 54, 56 | ||
Samba to eceServ1 | 61, 67, 64, 67!! | ||
H170, i7-7700K, 10Gb/s | Ubuntu 18.04 | 10Gb/s | 38, 38, 37, 37 |
local SSD | 34, 33, 32, 32 | ||
H170, i7-6700, 1Gb/s | N: drive | 1Gb/s | 52, 51, 52, 63 |
Samba to ieee | 47, 47, 46, 47 | ||
Samba to eceServ1 | 58, 61, 57, 58 | ||
local SSD | 53, 39, 39, 39, 39 | ||
eceLinux1, i7-6700, 64G, 1Gb/s | NFS to eceServ1 | 1Gb/s | 39, 44, 40, 42 |
local SSD | 38, 37, 36, 36 | ||
eceLinux9, Ryzen 1700X CentOS 7, 64G 2133MHz KVR, 10Gb/s | NFS to eceServ1 | 10Gb/s | 44, 44, 42, 43 |
local HDD | 38, 37, 38, 39 | ||
Ryzen-7 1700 Win 10, 64G 2133MHz KVR, 1Gb/s | Samba to ieee | 1Gb/s | 58, 54, 54, 54 |
ECE P: drive | 71, 62, 60, 59 | ||
Nexus N: drive | 56, 55, 53 | ||
local SSD | 49, 49, 48 | ||
VM CentOS 7 on i5-6500 3.2GHz with VM on FreeNAS via NFS | VM has 2 cores | 61, 51, 52, 51 | |
VM has 4-cores | 52, 51, 51 | ||
Intel NUC i7-7567U 2-core 2.6 3.5GHz boost (MME) | local NVMe M.2 | 1Gb/s | 40, 39, 40 |
Samba to ieee | 50, 51, 51, 56 | ||
Intel NUC i7-6770HQ 4-core 2.6 to 3.5GHz | local NVMe M.2 | 1Gb/s | 42, 41, 41, 41 |
Intel NUC i7-7567U 2-core 3.5 to 4.0GHz (ECE) | local NVMe M.2 | 1Gb/s | 45, 43, 43, 43 |
Samba to ieee | 54, 53, 53, 53 | ||
Intel Gold 5120 Xeon 14-core 2.2 to 3.2GHz running Ubuntu 18.04 LTS | NFS eceServ1 | 10Gb/s | 55, 54, 54 |
local SSD | 49, 51, 49, 50 | ||
Intel Gold 5120 Xeon 14-core 2.2 to 3.2GHz running CentOS 7.3 Linux with 3.10.0-693.11.6 kernel does not clock past 2.2GHz | NFS eceServ1 | 1Gb/s | 57, 57, 58 |
local SSD | 74, 51, 53, 53 | ||
AMD Threadripper 12-core 3.5 to 4.0GHz running CentOS 7.3 Linux | local 500G HDD | 1Gb/s | 52, 36, 36, 35 |
Samba to P: drive | 41, 40, 39 | ||
AMD Threadripper 12-core 3.5 to 4.0GHz running Win 10 Edu | local 500G HDD | 1Gb/s | 42, 43, 43, 42 |
local SSD | 44, 42, 42 | ||
Samba to ieee | 48, 48, 50, 48 | ||
Intel i7-8700 6-core 3.2 / 4.6GHz boost running Win 10 Edu. | local 500G HDD | 1Gb/s | SSD: 39, 39, 38, 39 HDD: 42, 39, 38, 39 |
Samba to ieee | 44, 43, 44, 43 | ||
Intel i7-8700 6-core 3.2 / 4.6GHz boost running CentOS 7 3.10.0-693 kernel | local 500G HDD | 1Gb/s | 50, 37, 37, 38, 37 |
NFS to eceServ | 41, 41, 42 |
This is compiling a trivial circuit - 3 LE's and 14 I/O pins.
To use ModelSim ASE: Tools -> Options -> EDA Tool Options, then set ModelSim-Altera to c:\Software\Altera\13.1\modelsim_ase\win32aloem
Quartus Version | FPGA Family | Synthesis Time seconds | Memory Use |
---|---|---|---|
9.0, 10.1 x86 | Cyclone II | 25 | 200M |
13.0 x86 or x64 | Cyclone II, III, IV | 26 to 33 | 300M |
13.1 x64 | Cyclone II, III, IV | 30 to 33 | 300M |
13.1 x64 | Cyclone V | 83 to 99 | 1.1G |
Using CentOS 6.2 x64 with Quartus 11
Computer | Power Use (80Plus Bronze, 2W when off/standby) | Number of simul. compiles | ||||
---|---|---|---|---|---|---|
BIOS | Linux | 1x | 2x | 4x | 8x | |
Intel i5-2320 on P8Z77-V LX programs are on NFS server Sparkle ATX-450 PN PS | 60.4W | 51.5W | 39,38 | 42,41 | 49,42 | 1:08, 1:08 |
Intel i5-2320 on P8Z77-V LX programs are on local HD Sparkle ATX-450 PN PS | 60.4W | 51.5W | 34,34 | 37,37 | 42,41 | 1:03, 1:05, 1:04 |
Intel i5-2320 on P8Z77-V LX Sparkle ATX-450 PN PS | 60.4W | 51.5W | 38 | 41 | 42 | 1:08, 1:08 |
Intel i3-2100 on P8H77-M | 40W | 27W Win 7 (65W pk) | 38 | 42,42 | 53,56 | 1:30, 1:30
AMD "550" on M4A785T-M | 78W | 43W (Win XP) | 47, 49 | 49, 50 | 1:16, 1:19 | 2:02, 2:12 |
AMD "560" on M5A88-M | 81W | 56W (Win 7) | ||||
AMD "1090T" on M5A88-M | 112W | 89W (160W pk) | 49, 53 | 51, 53 | 56, 56 | 64, 61 |
Quartus 10.1 x64, Win 7, 4G RAM, M2N PV-VM, DDR2 | Q11.1 x64, Win 7, 4G RAM, M5A88V | Q9.0 x32, Win XP, 4G RAM, M3A785 | Q 10.1 x64 Win 7, Intel i3 4G RAM, P8H77-M | Q10.1 x64 Win 7, 4G RAM, M5A88-M | ||||
---|---|---|---|---|---|---|---|---|
Disk | 2.2GHz x1 | 2.5GHz x2 | 2.5G x2 + SAV | 2.7GHz LE1640 | "250" 3.0GHz x2 DDR3 | "270" 3.4GHz x2 DDR2 | i3-2100 3.1GHz x2 3M cache, DDR3 | "560" 3.3GHz x2 DDR3, 7M cache |
Nexus N: | 33,39,38 | 25,26 | 30,28,25,30 | 39,29,29,29 | 31,32,32 | 39,27,24,25 Win7 Q10.1 is 20, 20,19,19 | 20, 21, 22, 21 i3-2120 is 18, 17, 18 | 25, 21, 20, 19 |
Local HD | 26,26 | 24,18,19,18 | 19,19 | - | 13,13,13 | 23,21 | 12, 12 | 12, 12, 12 |
Linux IEEE (single HD) | 34,32 | 22,23,23 | - | - | 43,26,25 | 25,25,25 | 16, 18, 17, 16 | 18, 18, 18 |
Linux P: (SAS 15k rpm RAID 6) | 36,34 | 25,24 | 25,25 | - | - | - | ||
USB Key | 55,56 | 47,43 | - | - | 40,44,40 | 41,41,43 | 34, 36, 37 | 35, 34, 34 |
Hardware | RAM | Q 9.0 x32 Win XP | Q10.1 x32 Win XP | |
---|---|---|---|---|
Eric's Nexus 3.1GHz x2 | 3G | 37, 28, 26 | 35, 23, 25 | |
Q9.0 x64 3.1GHz x2 | 2G | 23, 11, 10, 10 | ||
2.6GHz 1-core | 2.5G | 33, 23, 21, 21 | ||
2G | 34, 23, 25 | 37, 23, 22 | ||
1G | 34, 26 | |||
512M | 166 !! | |||
2.5GHz 2-core | 1G | 33, 23, 21, 21 | 46, 22, 22 | |
2G | 32, 21, 22 | 38, 19, 19 | ||
3.1GHz 2-core | 2G | 28, 17, 18 | 34, 14, 15 | |
4G | 28, 17, 17 | 34, 14, 15 | ||
2.3GHz 2-core, M2N-PV, Win 7 | 4G | 32, 22, 22 | 19, 19 | |
2.2GHz 1-core, M2N-PV, Win 7 | 4G | 38, 27, 31, 27 | 28, 26, 26, 26 |
If having issues with Remote Desktop (RDP) to EngTerm, have the user delete the file default.rdp from their Documents folder - the file is likely hidden, so turn on "show hidden files" under the folder View options.
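For reference, this can be done from a command prompt; a minimal sketch (the /a:h flag matches the hidden attribute, and the path assumes a default profile location):

  del /a:h "%USERPROFILE%\Documents\default.rdp"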
Users install make, but the installation doesn't follow them between machines. MobaXterm unpacks itself into the user profile's AppData/Local/Temp/Mxt111 or Roaming ???. If make is installed it's 100MB - too big for the profile.
One option is to pre-install the tools and unzip them into C:\software\mobaxterm-root - but will it work as read-only? - July 2019
The correct driver for our RS485 devices is NOT the Prolific driver but the driver in the file SCADA-driver-i-756x_1223_driverinstaller.exe. In Windows the device should show up as the i-756x driver. If the Prolific driver is detected (it has the same PID and VID: 2303, 067B), right-click and delete the driver. Note: the Prolific driver seems to be downloaded by Windows automagically. Apr 2019
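If the Prolific driver keeps coming back, the staged driver package can also be removed with pnputil; a sketch (the oem42.inf name is a made-up example - get the real name from the enumeration first):

  pnputil -e              (list third-party driver packages and their oemNN.inf names)
  pnputil -d oem42.inf    (delete the Prolific driver package by that name)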
Users can start ETAP, but when running a simulation they get an error related to the database, such as:
"Failed to connect to database" on ETAP 18
"exception retrieve database version information ... SQL cannot create automatic instance 0x89C50118" on ETAP 16.0 or 16.1
The fix:
ETAP 14 and higher versions use SQL Server 2012 Express LocalDB for the project database.
It seems the SQL instance required to connect to the ETAP projects is failing.
Double-click to run the file CleanUpLocalDB.cmd, located at:
C:\ETAP 1800\Other\CleanUpLocalDB.cmd
COMSOL simulations result in black output. The problem is mentioned here: https://www.comsol.com/support/knowledgebase/933/
and the fix is:
The quickest solution is to switch to software rendering:
1. Start COMSOL Multiphysics.
2. Open the Preferences dialog box. Windows users: from the File menu, select Preferences. Cross-platform (Mac and Linux) users and COMSOL versions 4.0 to 4.3b: from the main menu select Options > Preferences.
3. In the Preferences window select Graphics and Plot Windows (version 4.4 and later) or Graphics (versions 4.0 to 4.3b) and set the Rendering option to Software.
4. Click OK and close the COMSOL Desktop.
7 June 2018 - Ryzen 3 2200G works with Ubuntu 16.04 LTS or newer and I can see the BIOS - but only with a direct connection to the monitor, not through the KVM switch I always use.
As of March 2018 - Ryzen 3 2200G APU will not turn on the video in an MSI B350M PRO-VDH motherboard. Switching between Ryzen 7 and 2200G requires resetting the BIOS. It will boot and run Linux fine with the 2200G.
CentOS 7.4 is unstable on the Ryzen 7 & MSI B350M PRO-VDH motherboard crashing at least daily with a CPU core "stuck".
Ubuntu 16.04 with 4.13 kernel is stable with the Ryzen 7. About 6 months previous CentOS 7.3 was stable with Ryzen.
Virt IO: Advanced -> Performance -> Cache: NONE to allow migration.
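In the libvirt domain XML this corresponds to the cache attribute on the disk's driver element; a sketch, assuming a qcow2 image at a typical path:

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/libvirt/images/vm.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>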
Just add date_default_timezone_set("America/Toronto"); before any time functions
Use this to find the settings: mysqld --verbose --help
max_connections = 151, open_files_limit = 5000
We're hitting the open_files limit regularly.
Note that checking the settings shows them with "-" instead of "_", i.e. max-connections 151
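A quick sketch of checking the live values and raising them (the numbers shown are arbitrary examples, and the config path may differ per install):

  # check the configured values (note the "-" spelling here)
  mysqld --verbose --help | grep -E 'max.connections|open.files.limit'
  # or ask a running server
  mysql -e "SHOW VARIABLES LIKE 'max_connections'; SHOW VARIABLES LIKE 'open_files_limit';"
  # raise them in /etc/my.cnf under [mysqld], then restart mysqld:
  #   [mysqld]
  #   max_connections = 300
  #   open_files_limit = 10000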
Feb 2014 - sftp to the Linux machines fails if the account is configured for ECE 327. The bash or csh printing the message about the account configuration for ECE 327 makes sftp fail with "Received message too long 1229866575". The solution is to wrap the 327 setup script:
if ( $?TERM == 1 ) then
    # set noglob
    source /home/ece327/setup-ece327.csh
    # unset noglob
endif
As of 2016 I'm seeing two failure modes for the Asus P8H77-M motherboard:
In all cases I don't see failing capacitors. Out of about 120 installed, 3 have failed totally, 2 have lost the NIC and 4 are hanging - within 4 years.
The two 820uF caps by the SATA connectors puff and fail starting 2.5 years after being put into service. It only seems to happen to the capacitors stamped with a "+" on the top (Feb 2013). Failure of the caps results in a failure of the motherboard to boot. Replace them and all motherboards work again.
Enabling CPU Virtualization seems to solve the interrupt issue (mouse or keyboard not working if they're USB) which can plague the last BIOS
Caps by the voltage regulator fail - half of the time destroying the motherboard.
P3 systems are 600MHz to 1GHz with 256M to 384M of RAM. P4 systems are 1.5 to 2.4GHz with 512M RAM. A Comsol 3D simulation was run on all machines.
P3's -- 19 to 25 seconds
P4's -- 8 to 10 seconds
Note that Intel has had the rug pulled from them by AMD and are moving fast to catch up. They've "killed" their Pentium line of processors and are now calling them "core". These are due to come out real soon now. I believe that they've, finally, wrapped the Intel Mobile power saving features into their desktop CPUs to reduce power consumption.

PROBLEMS
--------
1) Only one serial port and it doesn't come out of the case by default. We need 2 serial ports for the Coldfire computers. We can do this via a dual serial port card ($52 per card). Another possible solution may be USB <--> Serial adapters but this is very unlikely as the serial communication programs do not yet support USB.
2) Fedora Core 5 installed and worked well - AFTER I dropped in a supported network card and did a full OS update.

TESTS
-----
AMD system works with Norton Ghost (finding NDIS drivers was awkward and the boot CD had to be manually massaged).
AMD system works with auto and locked network speed/duplex.
AMD system seems to automatically use power saving features with Fedora Core 5, and it's easy to add with Windows (enable Minimal Power Saving Mode after installing Power Now).

Quartus Performance Tests - Winter 2014
---------------------------------------
While considering FPGA boards to replace the DE2 it was discovered that synthesis times were dramatically larger for newer boards. The circuit being compiled was close to trivial - just a few gates between input and output.

DE2     Cyclone II  35,000 LEs
DE2-115 Cyclone IV 115,000 LEs
DE1-SoC Cyclone V

Quartus II 13.1 Synth. Times
- - - - - - - - - - - - - -
Cyclone III EP3c5f256C6   42, 38 sec
Cyclone IV  EP4ce115f23C7 1:05, 1:04, 1:02
Cyclone V   5csema5f31C6  2:18, 2:23, 2:22

ECE 224 / 324 / 325 Test
Eclipse -> New -> NIOS II SBP Template
Start -> Nios -> Build Tools for Eclipse, use project directory /software
Generate: 6:09, 6:15 with USB drive; on the N: disk 3:38, 2:04

PERFORMANCE
-----------
This compares performance using Altera Quartus II 6.0 for compiling a sample ECE 325 processor in VHDL. This is the most CPU intensive application in our PC labs. Compile times are currently around 5 minutes using Quartus II 3.0 on existing P3's (1.2 to 1.4 GHz).

Power Draw
----------
P4 3.0GHz HT System - 200W power draw, noisy fans
P4 3.0GHz Celeron Dual System - 110W, noisy fans
P4 2.4GHz Celeron - 85W, noisy fans
P3 System - 55W power draw

Historical power consumption:
P2 System - 47W
P1 System - 33W
486 System - 25W
386 System - 32W
AMD System - 55W power draw most of time, peaking 100W when number crunching

Performance
-----------
P3-1.2GHz - 2:45 compile time on C: disk (2:50 on N: disk!)
P4-2.4GHz Celeron - 1:25 compile time
P4-3.0GHz D Celeron - 1:20 compile time (dual cpu Celeron)
P4-3.0GHz HT P4 - 1:10 compile time (hyper-threading, quasi dual CPU)
AMD Athlon 64 2.2GHz (rated 3.5GHz) - 1:00 compile time
AMD Athlon 64 Dual 2GHz (rated 3.8GHz) - 1:00 compile time (dual CPU)

System Info:
AMD - joined to Nexus, minimal software install
P3 - typical Nexus machine, fully loaded with s/w and NAV
P4 - not on Nexus, NAV added a 1 sec delay to compile times

System Cost:
AMD 2GHz Athlon 64 A8N-VM, $447
AMD 2.2GHz Dual Athlon 64 A8N-VM, $630
P4P800VM P4 $503
P4P800VM Celeron dual $337
CPU prices +$120 for P4, +$230 for Mobile P4
AMD CPU 2GHz mobile $106, 2GHz Athlon 64 $197, 2.2GHz Dual Athlon 64 $381
July 2006 - NOTE: new AM2 processor using DDR2 coming; no CSM M/B yet.
To set the baud rate (1200N81) with CentOS 5.2, add "stty -F /dev/ttyS0 1200" to rc.local.
To set automatic login (the GUI widget doesn't work) edit /etc/gdm/custom.conf.
Add the user to the group which owns the serial port to allow access.
The default account has no password and logs in automatically, so disable keyboard locking.
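Putting those pieces together, a minimal sketch (the kiosk account name and the uucp group are assumptions - check the owning group with ls -l /dev/ttyS0):

  # /etc/rc.local - serial port speed
  stty -F /dev/ttyS0 1200

  # /etc/gdm/custom.conf - automatic login
  [daemon]
  AutomaticLoginEnable=true
  AutomaticLogin=kiosk

  # give the account access to the serial port
  usermod -a -G uucp kiosk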
genkey server_name.x.com
Apache rewrite to redirect http to https. From http://davmp.kimanddave.com/2008/03/30/installing-mailman-to-use-https-on-centos-51/
# You'll need to insert two RewriteRule lines in your httpd config files to redirect all non-https requests for Mailman features to the https site. And if you don't have any rewrite features setup elsewhere, you'll need a couple of other lines. You can find out the most about this process by reading the Apache docs for the RewriteEngine here. But, since I've already got a virtual host file that represents the config I want to have Mailman show up as a part of, I simply added lines like the following:
...
RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule ^/mailman/(.*) https://davmp.kimanddave.com/mailman/$1 [L,R]
RewriteRule ^/pipermail/(.*) https://davmp.kimanddave.com/pipermail/$1 [L,R]
...
Include "conf.d/mailman.conf.include"
And then renamed /etc/httpd/conf.d/mailman.conf to /etc/httpd/conf.d/mailman.conf.include. These settings prevent Apache from allowing these URLs to work for any other virtual hosts.
The onboard raid is Adaptec 7902 software raid. It may support hot swapping of drives available at boot time - but it does not support adding extra drives when booted.
Power draw for these servers is around 220W at boot time and 140W when running Centos 5. A modern quad core Xeon blade server sucks 140W with dual 15k RPM HDs. Both are much higher than comparable AMD systems (typically 45W for a dual core in light use).
I tested the system by setting up RAID 1, installed the OS and pulled one HD. Immediately the OS gave errors about the pulled drive. Booting into the RAID controller software revealed that the RAID array was "optimal"! Reboot with the HD re-inserted and all seems well. I was not able to find how the software raid was being done. /proc/mdstat revealed nothing. When I pulled the one harddrive again the OS pretty well hung. This is symptomatic of software RAID with RedHat.
The RAID setup revolves around the dmraid commands: "dmraid -l" to list supported formats, "dmraid -r" to list the current setup and driver, then "dmraid -s -s asr_raid1array" lists the RAID particulars.
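For reference, the sequence looks like this (asr_raid1array is the set name from above; yours may differ):

  dmraid -l                     # list the RAID metadata formats dmraid supports
  dmraid -r                     # show the discovered RAID devices and the format driver in use
  dmraid -s -s asr_raid1array   # detailed status of the Adaptec (asr) RAID 1 set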
The Ultra 320 SCSI HDs are curious. I've seen one where the BIOS, at boot time, flagged it as failing - but the SMART tools said that the health status was fine. The drive had bad sectors but SMART monitoring was not reporting that - only the temperature. I inserted a failing HD and it was not detected at boot time - but the SMART Health Status was failing.
My conclusion is that the RAID on these blade servers is less than useless.
Intel i3 testbed Asus P8H77-M motherboard 4 x 8G RAM, Adaptec 6805e RAID Controller | Size | Sequential Output | Sequential Input | Random Seeks | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Per Char K/sec | %CP | Per Block K/sec | %CP | Rewrite K/sec | %CP | Per Char K/sec | %CP | Block K/sec | %CP | Random Seeks /sec | %CP | Files | |
i3 testbed, tmp, 8G RAM ST 80G HD - Nov2014 | 15488M | 113549 | 98 | 192080 | 13 | 137832 | 14 | 137832 | 14 | 113282 | 96 | 722060 | 35 |
Intel i3 testbed Asus P8H77-M motherboard 4 x 8G RAM, Adaptec 6805e RAID Controller | Size | Sequential Output | Sequential Input | Random Seeks | ||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Per Char K/sec | %CP | Per Block K/sec | %CP | Rewrite K/sec | %CP | Per Char K/sec | %CP | Block K/sec | %CP | Random Seeks /sec | %CP | Files | |
Mirrored | ||||||||||||||||||||||||||
WD Green mirror | 63960M | 91861 | 82 | 93702 | 6 | 46769 | 4 | 103267 | 92 | 196994 | 8 | 437.3 | 1 | 16 | ||||||||||||
i3-ssd-5400x1-mirror | 63960M | 111009 | 95 | 114337 | 8 | 71804 | 6 | 110305 | 93 | 599232 | 25 | |||||||||||||||
i3-ssd-5400x1-mirror | 63960M | 106161 | 92 | 107771 | 7 | 68430 | 6 | 110972 | 94 | 602734 | 25 | |||||||||||||||
SSD mirroring 500G 7,200 rpm | 63960M | 115539 | 99 | 130346 | 9 | 78779 | 7 | 110557 | 94 | 600073 | 25 | |||||||||||||||
1TB SSD mirroring 10k rpm Velociraptor | 63960M | 115353 | 99 | 148500 | 10 | 86225 | 7 | 111105 | 94 | 592113 | 25 | |||||||||||||||
i3-ssd-SAS-mirror | 63960M | 115926 | 99 | 206926 | 14 | 100789 | 9 | 113142 | 96 | 586938 | 24 | |||||||||||||||
1TB SSD mirror | 63960M | 115585 | 99 | 408203 | 26 | 203795 | 19 | 116808 | 99 | 830451 | 35 | |||||||||||||||
Striped Across Two Drives | ||||||||||||||||||||||||||
i3 testbed RAID 10 2x1TB SSD, 2x Green WD, 8G RAM - Nov2014 | 15488M | 113549 | 98 | 192080 | 13 | 137832 | 14 | 137832 | 14 | 113282 | 96 | 722060 | 35 | |||||||||||||
i3 testbed RAID 10 2x1TB SSD, 2x Green WD - i3-2SSD-2greenHD, 32G RAM | 63960M | 115840 | 99 | 191006 | 13 | 111572 | 10 | 116488 | 99 | 748610 | 33 | |||||||||||||||
i3-striped7200s | 63960M | 115346 | 99 | 267073 | 18 | 97241 | 8 | 111901 | 95 | 277322 | 12 | 396.4 | 1 | 16 | ||||||||||||
i3-stripedVelociraptors | 63960M | 115523 | 99 | 257582 | 17 | 93084 | 8 | 112779 | 96 | 266946 | 11 | 670.2 | 1 | 16 | ||||||||||||
i3-2SSD-2greenHD | 63960M | 115840 | 99 | 191006 | 13 | 111572 | 10 | 116488 | 99 | 748610 | 33 | |||||||||||||||
i3-ssd-7200-x2-raid10 | 63960M | 116351 | 99 | 266619 | 18 | 116523 | 11 | 115262 | 98 | 444195 | 19 | 878.0 | 2 | 16 | ||||||||||||
i3-ssd-velo-x4-raid10 | 63960M | 115311 | 99 | 260695 | 17 | 123764 | 11 | 114368 | 97 | 464339 | 20 | 1486.9 | 3 | 16 | ||||||||||||
i3-ssd-SAS-x4-raid10 | 63960M | 113218 | 99 | 400418 | 26 | 180754 | 17 | 109283 | 97 | 569758 | 25 | 2124.8 | 4 | 16 | ||||||||||||
i3-striped15kSAS | 63960M | 116491 | 99 | 418576 | 26 | 161056 | 14 | 114495 | 97 | 433179 | 19 | 986.1 | 2 | 16 | ||||||||||||
i3-striped15kSAS | 63960M | 116337 | 99 | 387681 | 25 | 148908 | 13 | 114809 | 97 | 404124 | 17 | 961.7 | 2 | 16 | ||||||||||||
2 1TB SSDs striped | 63960M | 113350 | 99 | 800885 | 50 | 292968 | 28 | 116128 | 98 | 831256 | 35 | |||||||||||||||
Striped Across Three Drives | ||||||||||||||||||||||||||
i3-WDgreen-strip3, no SSDs | 63960M | 115333 | 99 | 285490 | 19 | 108748 | 9 | 113324 | 96 | 294216 | 12 | 452.8 | 1 | 16 | ||||||||||||
i3-SSD-WDGreenRAID10x6 (redone) | 63960M | 115639 | 99 | 285154 | 19 | 167755 | 16 | 114445 | 97 | 741850 | 33 | |||||||||||||||
i3-SSD-WDGreenRAID10x6 (redone again) | 63960M | 115642 | 99 | 279869 | 18 | 165685 | 16 | 114490 | 97 | 743868 | 33 | |||||||||||||||
i3-ssd-7200-x3-raid10 | 63960M | 115427 | 99 | 381174 | 24 | 164407 | 15 | 114611 | 97 | 550458 | 24 | 1333.0 | 3 | 16 | ||||||||||||
3 1TB SSD's striped | 63960M | 113128 | 99 | 905738 | 58 | 333320 | 33 | 111076 | 99 | 840572 | 36 | |||||||||||||||
FreeNAS vs CentOS | Size | Sequential Output | Sequential Input | Random Seeks | Sequential Create | Random Create | |||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Per Char | Per Block | Rewrite | Per Chr | Block | Random Seeks | Files | Create | Read | Delete | Create | Read | Delete | ||||||||||||||
K/sec | %CP | K/sec | %CP | K/Sec | %CP | K/sec | %CP | K/Sec | %CP | /sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | |||
FreeNAS 4 x 1TB SSD to Client via 10GBe Network, March 2016 | ||||||||||||||||||||||||||
SuperMicro FreeNAS 10Gbe with 4x 1TB SSD to SuperMicro 10Gbe CentOS 7 | 128168M | 182710 | 99 | 1061873 | 29 | 311626 | 15 | 164701 | 97 | 331800 | 8 | 3547.2 | 7 | 16 | 6027 | 11 | +++++ | +++ | 6181 | 12 | 6127 | 12 | +++++ | +++ | 11102 | 9 |
SuperMicro FreeNAS 1Gbe with 4x 1TB SSD to SuperMicro 1Gbe CentOS 7 | 128168M | 11464 | 7 | 10016 | 0 | 7156 | 0 | 14406 | 8 | 14368 | 0 | 1745.3 | 3 | 16 | 1095 | 0 | 28088 | 0 | 1638 | 0 | 1362 | 0 | 4419 | 36 | 2329 | 0 |
SuperMicro FreeNAS 10Gbe i5 server with 3 x 1TB RAIDz, CentOS 6 i3 10Gbe client, FreeNAS10gb | 63848M | 185924 | 99 | 1111280 | 33 | 179262 | 8 | 167769 | 91 | 230131 | 4 | 2867.3 | 1 | 16 | 4571 | 11 | +++++ | +++ | 3761 | 11 | 5170 | 14 | +++++ | +++ | 8361 | 15 |
SuperMicro FreeNAS 100Mb with 4x 1TB SSD to SuperMicro 100Mb CentOS 7 | 128168M | 11468 | 14 | 11467 | 1 | 6901 | 1 | 12404 | 24 | 13175 | 1 | 1070.0 | 6 | 16 | 493 | 11 | 17779 | 9 | 677 | 9 | 499 | 12 | 1042 | 32 | 1005 | 0 |
AMD 8-core FX to 1Gb/s eceSERV RAID 10 3 of SSD and 3 of HD | 63840M | 63975 | 96 | 96058 | 7 | 45568 | 10 | 65800 | 99 | 100507 | 10 | 1960.0 | 7 | 16 | 1113 | 3 | 2531 | 6 | 1142 | 6 | 1106 | 4 | 2792 | 5 | 1382 | 5 |
eceLinux1-InfiniHost-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA | 128304M | 157857 | 79 | 105521 | 6 | 136272 | 12 | 183754 | 99 | 608778 | 17 | 3117.1 | 11 | 16 | 3348 | 5 | +++++ | +++ | 5620 | 5 | 3574 | 6 | 20763 | 9 | 6845 | 6 |
eceLinux2-to-i3-raid10-ssd-10krpm-eceServ-NFSoRDMA | 128304M | 148413 | 75 | 86777 | 5 | 136715 | 14 | 185525 | 99 | 612089 | 39 | 2772.8 | 13 | 16 | 9577 | 10 | +++++ | +++ | 15481 | 11 | 8249 | 10 | +++++ | +++ | 12548 | 13 |
Setup: HP ConnectX-2 cards connected to a Voltaire 4036 switch at QDR speeds with Mellanox cables. CentOS 6 on the NFS file server, using Datagram or Connected mode to share files to CentOS 6 or CentOS 7 clients. All CentOS machines use the stock InfiniBand support. NFS sharing uses IPoIB or RDMA.
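For reference, the two mount styles look roughly like this (the 10.0.0.1 address and /export path are illustrative, and vers=3 is an assumption; 20049 is the stock NFS-over-RDMA port):

  # NFS over IPoIB - an ordinary NFS mount against the server's IB interface address
  mount -o vers=3 10.0.0.1:/export /mnt/nfs
  # NFS over RDMA
  mount -o rdma,port=20049,vers=3 10.0.0.1:/export /mnt/nfs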
Symptom: Large text files (18M and 160M) with 2 columns of numbers get corrupted in a repeatable pattern. Using RDMA results in much higher corruption (7772 vs 960 lines corrupted). No errors are reported by the OS or switch. Using ibqueryerrors and diffing its output before and after reading a corrupted file shows no change in the error counters.
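The counter check was along these lines (file names are illustrative):

  ibqueryerrors > before.txt
  md5sum /mnt/nfs/bigfile.txt    # read the suspect file across the mount
  ibqueryerrors > after.txt
  diff before.txt after.txt      # shows no counter changes despite the corruption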
These tests were done using cheap i3 and AMD based systems running CentOS 7. InfiniHost III DDR cards were used with an SDR switch (10Gb/s clock or 8Gb/s actual). Direct connected ConnectX-2 cards were used for the QDR tests. The client and server booted from 7200 rpm 500G HDs. For performance tests a Samsung EVO 850 500G SSD was used; its performance maxes out at about 10Gb/s. However, the latency of the SSD is much lower than that of 10Gb ethernet.
A very easy test to run is fio on the links. The command options used were:
fio --rw=randread --bs=64k --numjobs=4 --iodepth=8 --runtime=30 --time_based --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name task1 --filename=/testing/junk/1.txt --size=10000000
Network | GB Data in 30 sec | Aggregate Bandwidth (MB/s, Gb/s) | Per-job Bandwidth (MB/s, Gb/s) | latency (ms) | iops
---|---|---|---|---|---|
QDR IB 40Gb/s NFS over RDMA | 94 | 3,100, 25 | 802, 6.4 | 0.615 | 12,535 |
DDR IB 20Gb/s NFS over RDMA | 24.4 | 834, 6.7 | 208, 1.7 | 2.4 | 3256 |
SDR IB 10Gb/s NFS over RDMA | 22.3 | 762, 6.1 | 190, 1.5 | 2.57 | 2978 |
QDR IB 40Gb/s | 16.7 | 568, 4.5 | 142, 1.1 | 3.4 | 2218 |
DDR IB 20Gb/s | 13.9 | 473, 3.8 | 118, 0.94 | 4.1 | 1845 |
SDR IB 10Gb/s | 13.8 | 470, 3.8 | 117, 0.94 | 4.2 | 1840 |
10Gb/s ethernet | 5.9 | 202, 1.6 | 51, 0.41 | 9.7 | 793 |
1Gb/s ethernet | 3.2 | 112, 0.90 | 28 | 17.8 | 438 |
100Mb/s ethernet | 346MB | 11.5 | 2.9 | 174 | 45 |
10Mb/s ethernet via switch | 36MB | 1.2 | 279kB/s | 1797 | 4 |
10Mb/s ethernet via hub | 33MB | 1.0 | 260kB/s | 1920 | 4 |
NOTE: It is clear from the NFS over RDMA data above that the ConnectX-2 card (QDR or 40Gb/s) has significantly better performance than the older InfiniHost III cards run at SDR or DDR speeds.
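The bonnie++ tables below were generated with invocations along these lines (a sketch - the file size and target directory varied per test):

  # -d: directory on the file system being measured
  # -s: file size in MB, roughly 2x RAM so caching can't hide the disk
  # -n: small-file count in multiples of 1024 (16 here, matching the Files column)
  # -u: user to run as when started as root
  bonnie++ -d /mnt/test -s 128304 -n 16 -u nobody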
Ethernet vs InfiniBand | Size | Sequential Output | Sequential Input | Random Seeks | Sequential Create | Random Create | ||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Per Char | Per Block | Rewrite | Per Chr | Block | Random Seeks | Files | Create | Read | Delete | Create | Read | Delete | ||||||||||||||
K/sec | %CP | K/sec | %CP | K/Sec | %CP | K/sec | %CP | K/Sec | %CP | /sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | |||
NFSoIB InfiniBand Tests using CentOS 7 | ||||||||||||||||||||||||||
TEST TO LOCAL 7200rpm HD on client?? NFS1gB-to-ssd | 128128M | 93253 | 52 | 78956 | 3 | 39927 | 2 | 104771 | 61 | 126355 | 3 | 146.6 | 0 | 16 | 116 | 2 | 32004 | 28 | 115 | 1 | 114 | 2 | 2241 | 23 | 115 | 1 |
40Gb/s InfiniBand QDR-IB-NFS to a 7200rpm HD | 128128M | 80589 | 54 | 73291 | 7 | 40351 | 7 | 97960 | 61 | 113744 | 8 | 140.2 | 0 | 16 | 115 | 2 | +++++ | +++ | 115 | 1 | 113 | 2 | +++++ | +++ | 114 | 1 |
1Gb/s ethernet to SSD on server | 128128M | 111587 | 59 | 114273 | 2 | 88361 | 3 | 123485 | 71 | 147153 | 2 | 7349.8 | 8 | 16 | 1855 | 8 | +++++ | +++ | 4340 | 5 | 1975 | 7 | 2327 | 10 | 5243 | 3 |
IB-SDR-to-ssd | 128128M | 172504 | 98 | 372110 | 15 | 205330 | 14 | 158240 | 99 | 702011 | 23 | 8275.7 | 33 | 16 | 4580 | 14 | +++++ | +++ | 8401 | 17 | 4461 | 16 | +++++ | +++ | 19318 | 14 |
40Gb/s InfiniBand QDR-IB-NFS to an SSD | 128128M | 185401 | 98 | 507588 | 31 | 241786 | 17 | 168968 | 99 | 702065 | 22 | 12098.4 | 18 | 16 | 6631 | 15 | +++++ | +++ | 9521 | 20 | 6466 | 18 | +++++ | +++ | 21849 | 18 |
IB-QDR-NFSoRDMA-to-ssd | 128128M | 183721 | 98 | 505062 | 33 | 251227 | 24 | 175136 | 99 | 704626 | 45 | 9183.0 | 45 | 16 | 10997 | 15 | +++++ | +++ | 17611 | 18 | 10897 | 18 | +++++ | +++ | 28937 | 17 |
SSD tested on the server | 23G | 116827 | 98 | 551325 | 35 | 240756 | 22 | 110494 | 99 | 666847 | 28 | +++++ | +++ | 16 | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ |
striped-2-SSDs-speed-on-server | 23G | 119803 | 99 | 680627 | 39 | 301202 | 25 | 111599 | 98 | 873434 | 23 | +++++ | +++ | 16 | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ |
IB-QDR-NFSoRDMA to 2 striped ssds | 128128M | 187368 | 99 | 592361 | 36 | 326021 | 21 | 176537 | 99 | 1081013 | 56 | 9373.2 | 38 | 16 | 14954 | 16 | +++++ | +++ | 20493 | 17 | 14270 | 17 | +++++ | +++ | 28693 | 16 |
IB-DDR-NFSoRDMA-to-RAIDz-3-ssd | 128128M | 184130 | 99 | 681531 | 38 | 334695 | 35 | 173072 | 99 | 993987 | 29 | 5214.1 | 10 | 16 | 1756 | 6 | +++++ | +++ | 1462 | 11 | 1797 | 6 | 19544 | 10 | 1549 | 11 |
IB-QDR-NFSoRDMA-to-RAIDz-3-ssd | 128128M | 184594 | 99 | 688436 | 37 | 339845 | 34 | 175052 | 99 | 1018267 | 46 | 4801.2 | 25 | 16 | 1351 | 19 | +++++ | +++ | 1237 | 17 | 1354 | 15 | +++++ | +++ | 1436 | 15 |
ZFS-RAIDz-3-ssds-on-server | 23G | 126824 | 99 | 766040 | 79 | 352183 | 53 | 111284 | 99 | 815423 | 41 | 5484.1 | 18 | 16 | +++++ | +++ | +++++ | +++ | +++++ | +++ | 29821 | 99 | +++++ | +++ | +++++ | +++ |
btrfs-3-ssds-on-server | 23G | 123233 | 99 | 416982 | 14 | 170231 | 14 | 109029 | 98 | 427480 | 16 | +++++ | +++ | 16 | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ | +++++ | +++ |
IB-QDR-NFSoRDMA-to-BTRfs-3-ssd | 128128M | 182386 | 96 | 368255 | 26 | 205813 | 24 | 174850 | 99 | 532123 | 34 | 6632.5 | 32 | 16 | 14425 | 16 | +++++ | +++ | 21168 | 14 | 14811 | 17 | +++++ | +++ | 20127 | 12 |
Ethernet vs InfiniBand Terse Summary | Sequential Output | Sequential Input | Random Seeks | Sequential Create | Random Create | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Per Char | Per Block | Rewrite | Per Chr | Block | Random Seeks | Create | Read | Delete | Create | Read | Delete |
M/sec | M/sec | M/Sec | M/sec | M/Sec | thousands /sec | per sec | per sec | per sec | per sec | per sec | per sec | |
Bonnie++ tests of a client to a server | ||||||||||||
TEST TO LOCAL 7200rpm HD on client?? NFS1gB-to-ssd | 93 | 79 | 40 | 105 | 126 | 0.15 | 116 | 32004 | 115 | 114 | 2241 | 115 |
40Gb/s InfiniBand QDR-IB-NFS to a 7200rpm HD | 81 | 73 | 40 | 98 | 114 | 0.14 | 115 | +++++ | 115 | 113 | +++++ | 114 |
1Gb/s ethernet to SSD on server | 112 | 114 | 88 | 123 | 147 | 7.3 | 1855 | +++++ | 4340 | 1975 | 2327 | 5243 |
10Gb/s InfiniBand SDR to an SSD | 173 | 372 | 205 | 158 | 702 | 8.3 | 4580 | +++++ | 8401 | 4461 | +++++ | 19318 |
40Gb/s InfiniBand QDR-IB-NFS to an SSD | 185 | 508 | 242 | 169 | 702 | 12 | 6631 | +++++ | 9521 | 6466 | +++++ | 21849 |
SSD tested on the server | 117 | 551 | 241 | 110 | 667 | +++ | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ |
Adaptec 6805e RAID 2 striped SSDs on the server | 120 | 681 | 301 | 112 | 873 | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ |
ZFS-RAIDz-3-ssds-on-server | 127 | 766 | 352 | 111 | 815 | 5.5 | +++++ | +++++ | +++++ | 29821 | +++++ | +++++ |
btrfs-3-ssds-on-server | 123 | 417 | 170 | 109 | 427 | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ |
IB-QDR-NFSoRDMA-to-BTRfs-3-ssd | 182 | 368 | 206 | 175 | 532 | 6.6 | 14425 | +++++ | 21168 | 14811 | +++++ | 20127 |
40Gb/s InfiniBand QDR NFS over RDMA to SSD | 184 | 505 | 251 | 175 | 705 | 9.2 | 10997 | +++++ | 17611 | 10897 | +++++ | 28937
40Gb/s InfiniBand QDR NFS over RDMA to 2 striped ssds | 187 | 592 | 326 | 177 | 1081 | 9.4 | 14954 | +++++ | 20493 | 14270 | +++++ | 28693 |
IB-DDR-NFSoRDMA-to-RAIDz-3-ssd | 184 | 682 | 335 | 173 | 994 | 5.2 | 1756 | +++++ | 1462 | 1797 | 19544 | 1549
IB-QDR-NFSoRDMA-to-RAIDz-3-ssd | 185 | 688 | 340 | 175 | 1018 | 4.8 | 1351 | +++++ | 1237 | 1354 | +++++ | 1436 |
FreeNAS 3x1TB SSD RAIDz 10Gb/s ethernet to client | 186 | 1111 | 179 | 168 | 230 | 2.9 | 4571 | +++++ | 3761 | 5170 | +++++ | 8361 |
AMD 8-core FX to 1Gb/s AMD based eceSERV RAID 10 3 of SSD and 3 of HD | 64.0 | 96.1 | 45.6 | 66 | 101 | 1.96 | 1113 | 2531 | 1142 | 1106 | 2792 | 1382 |
eceLinux1-InfiniHost-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA | 158 | 106 | 136 | 184 | 609 | 3.1 | 3348 | +++++ | 5620 | 3574 | 20763 | 6845 |
eceLinux2-ConnectX2-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA | 148 | 87 | 137 | 186 | 612 | 2.8 | 9577 | +++++ | 15481 | 8249 | +++++ | 12548 |
Before deploying a new file server I performed some performance tests after finding Quartus compiles to be slower than expected.
2 simultaneous compiles took 20 secs on the old file server (hybrid RAID 10 with 3 of 1TB SSDs and 3 10k rpm HDs) with hardware RAID, and 33 seconds on the ZFSonLinux server (4 of 1TB SSDs in RAIDz). Tests of 1, 4 or 8 compiles at a time were similarly affected.
eceServ has a messed-up RAID 10 array. It has 1 SSD in parallel with an SSD, 1 HD in parallel with an SSD, and two HDs in parallel, so performance is suboptimal compared to the original configuration - 3 SSDs in parallel with 3 HDs.
The fio test indicates that the network connection to the old and new file servers is exactly the same (they have the same switches and routers in the path, and the same motherboard, DDR4 RAM, and model of SSDs ...).
The first 3 tests involve mounting the file system on a client machine with 3 different methods; the last test is a bonnie++ test of the file system on the server itself.
The file system is 4 of 1TB Samsung 850 EVO SSDs in RAIDz with a pair of HDs on a raid controller for booting.
All tests were done with kernel 3.10.0-327.28.2.el7.x86_64 except as noted below.
Ethernet vs InfiniBand Terse Summary | Sequential Output | Sequential Input | Random Seeks | Sequential Create | Random Create | Quartus Compile | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Per Char | Per Block | Rewrite | Per Chr | Block | Random Seeks | Create | Read | Delete | Create | Read | Delete | |
M/sec | M/sec | M/Sec | M/sec | M/Sec | thousands /sec | per sec | per sec | per sec | per sec | per sec | per sec | ||
ecelinux3- use FreeNAS 4TB HD RAIDz array | 29 sec | ||||||||||||
ecelinux3- use local-ssd | 140 | 307 | 193 | 126 | 662 | ++ | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ | 21 sec |
1Gb/s ethernet, eceLinux3-to-servNew 4 ssd RAIDz lz4 | 108 | 106 | 38 | 110 | 140 | 3.0 | 151 | +++++ | 155 | 151 | 5286 | 162 | 33 sec |
1Gb/s eth, ecelinux3-to-servNew-ssd-single as ZFS vol no lz4 | 101 | 99 | 37 | 109 | 144 | 2.9 | 178 | +++++ | 184 | 176 | 5343 | 190 | 32 sec with lz4 |
1Gb/s eth, ecelinux3-to-servNew-ssd as ext4-1Gbe | 98 | 105 | 35 | 97 | 102 | 2.5 | 1046 | +++++ | 3203 | 1309 | 5936 | 4792 | 32 sec |
1Gb/s ecelinux3-to-serv hybrid raid 10 | 108 | 109 | 50 | 113 | 140 | 9.4 | 1456 | 32676 | 2577 | 1424 | 4363 | 2993 | 23 sec |
1Gb/s ethernet ecelinux3-to-servNew-tmp-directory-1Gbe - regular HD ext4 | 89 | 86 | 32 | 115 | 136 | 2.8 | 743 | +++++ | 1075 | 608 | 5299 | 994 | |
QDR InfiniBand, eceLinux3-to-servNew-IB 4 ssd RAIDz lz4 | 141 | 1597 | 847 | 128 | 2186 | 3.8 | 158 | +++++ | 162 | 157 | 20411 | 167 | |
QDR RDMA IB, eceLinux3-to-servNew-RDMA 4 ssd RAIDz | 142 | 1968 | 1042 | 129 | 3456 | 4.0 | 167 | +++++ | 169 | 165 | 24618 | 172 | 37 sec |
On the server eceServ-RAID10x6-hybridSSD-HD (128G blocks) | 188 | 357 | 178 | 167 | 413 | 2.3 | +++++ | +++++ | +++++ | +++++ | +++++ | +++++ | |
On the server (128G blocks), eceserv-new.uwaterloo.ca | 175 | 666 | 538 | 168 | 2596 | +++++ | 24567 | +++++ | 25537 | 21229 | +++++ | 29492 | |
ecelinux3-to-servNew-ssd-zfs-rdma kernel-3.10.0-327.28.3.el7.x86_64 | 142 | 2012 | 1114 | 130 | 3225 | 6.6 | 351 | +++++ | 338 | 319 | 25725 | 350 | 27 sec |
ecelinux3-to-servNew-home-zfs-rdma kernel-3.10.0-327.28.3.el7.x86_64 | 143 | 1877 | 1167 | 128 | 3263 | 6.4 | 318 | +++++ | 311 | 298 | 25953 | 318 | 28 sec |
Tests - Dec 2016 - ZFSonLinux server setup using ashift=12, 4 x 1TB Samsung SSDs

eceLinux1 to eceServ1 using QDR RDMA
read : io=12015MB, bw=410086KB/s, iops=6407, runt= 30001msec
slat (usec): min=4, max=213, avg=10.07, stdev= 6.41
clat (usec): min=411, max=2666, avg=1210.97, stdev=193.85
lat (usec): min=489, max=2673, avg=1221.10, stdev=193.78
READ: io=48062MB, aggrb=1602.2MB/s, minb=409912KB/s, maxb=410318KB/s, mint=30000msec, maxt=30001msec

Test #2
read : io=12083MB, bw=412404KB/s, iops=6443, runt= 30001msec
slat (usec): min=4, max=944, avg= 9.95, stdev= 6.71
clat (usec): min=377, max=3887, avg=1204.72, stdev=200.28
lat (usec): min=391, max=3897, avg=1214.72, stdev=200.27
READ: io=48335MB, aggrb=1611.1MB/s, minb=412280KB/s, maxb=412607KB/s, mint=30001msec, maxt=30001msec

Older fio tests - 1Gb/s bandwidth to both servers is the same speed (same switch & router - so duhh)

fio test to eceServ-new ssd setup as ZFS with lz4
read : io=841024KB, bw=28026KB/s, iops=437, runt= 30009msec
slat (usec): min=9, max=167, avg=64.01, stdev=19.53
clat (msec): min=1, max=38, avg=18.00, stdev= 2.80
lat (msec): min=1, max=38, avg=18.06, stdev= 2.81

fio test to eceServ-new ssd setup as ZFS with lz4 using RDMA
read : io=18999MB, bw=648478KB/s, iops=10132, runt= 30001msec
slat (usec): min=7, max=249, avg=11.05, stdev= 2.03
clat (usec): min=149, max=10420, avg=742.97, stdev=192.64
lat (usec): min=163, max=10430, avg=754.14, stdev=192.78

fio test to eceServ hybrid RAID 10 array
read : io=844160KB, bw=28131KB/s, iops=439, runt= 30008msec
slat (usec): min=10, max=356, avg=68.07, stdev=19.51
clat (msec): min=4, max=30, avg=17.79, stdev= 2.02
lat (msec): min=4, max=30, avg=17.86, stdev= 2.02

fio test to FreeNAS backup server using 4TB HDs in RAIDz
read : io=750464KB, bw=25000KB/s, iops=390, runt= 30019msec
slat (usec): min=11, max=238, avg=65.48, stdev=19.84
clat (msec): min=5, max=37, avg=19.97, stdev= 2.58
lat (msec): min=5, max=37, avg=20.04, stdev= 2.58

bonnie++ 1.03e results, CSV form (fields: name, size, seq-output per-chr K/s, %CP, block K/s, %CP, rewrite K/s, %CP, seq-input per-chr K/s, %CP, block K/s, %CP, random seeks/s, %CP, files, seq create/s, %CP, read/s, %CP, delete/s, %CP, random create/s, %CP, read/s, %CP, delete/s, %CP):

ecelinux3-to-servNew-home-zfs-rdma new kernel,63776M,143043,99,1876643,53,1167472,56,127623,99,3263335,62,6431.5,8,16,318,3,+++++,+++,311,2,298,3,25953,39,318,1
ecelinux3-to-servNew-ssd-zfs-rdma new kernel,63776M,142080,99,2011716,57,1114007,56,129575,99,3224767,61,6573.3,9,16,351,4,+++++,+++,338,2,319,3,25725,41,350,2
ecelinux3-to-local-ssd,63776M,140262,99,307274,15,192950,17,125994,99,662069,29,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
ecelinux3-to-servNew-ssd-single-as-ZFSvol-1Gbe,63776M,100736,74,99212,4,37053,4,108801,87,143548,6,2879.7,7,16,178,1,+++++,+++,184,1,176,1,5343,16,190,0
SSD as ext4 partition:
ecelinux3-to-servNew-ssd-ext4-1Gbe,63776M,98328,74,105344,5,34592,3,96947,80,101594,4,2519.2,4,16,1046,9,+++++,+++,3203,14,1309,12,5936,17,4792,17
ecelinux3-to-servNew-tmp-directory-1Gbe,63776M,89070,67,86473,4,31980,3,114609,90,135646,6,2821.0,4,16,743,6,+++++,+++,1075,6,608,6,5299,16,994,5
eceServ-RAID10x6-hybridSSD-HD,128448M,188410,99,356675,13,177819,12,167380,89,412860,18,2256.7,4,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
eceLinux3-to-servNew,63776M,107629,79,106149,5,38316,4,109592,87,140060,6,2973.0,6,16,151,1,+++++,+++,155,1,151,1,5286,16,162,0
eceLinux3-to-servNew-RDMA,63776M,142215,99,1967963,54,1042398,50,128675,99,3455573,66,4029.8,5,16,167,1,+++++,+++,169,1,165,1,24618,39,172,0
eceLinux3-to-servNew-IB,63776M,140834,99,1596642,57,846652,45,128086,99,2186268,45,3776.0,5,16,158,2,+++++,+++,162,1,157,2,20411,40,167,1
eceserv-new.uwaterloo.ca,128168M,175209,98,666440,97,538202,94,167666,98,2596462,87,+++++,+++,16,24567,99,+++++,+++,25537,99,21229,100,+++++,+++,29492,100
ecelinux3-to-serv,63776M,107826,79,108519,5,49646,5,113269,89,139719,6,9374.2,13,16,1456,11,32676,14,2577,12,1424,11,4363,14,2993,12
Server | Size | Sequential Output | Sequential Input | Random Seeks | Sequential Create | Random Create | ||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Per Char | Per Block | Rewrite | Per Chr | Block | Random Seeks | Files | Create | Read | Delete | Create | Read | Delete | ||||||||||||||
K/sec | %CP | K/sec | %CP | K/Sec | %CP | K/sec | %CP | K/Sec | %CP | /sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | per sec | %CP | |||
eceServBoot RAID0 600G SAS and 500G 7200rpm - Nov2014 | 63704M | 82065 | 80 | 73747 | 9 | 38839 | 6 | 97081 | 92 | 197248 | 22 | 761.3 | 5 | 16 | ||||||||||||
eceServ Home 6xSAS 15k rpm RAID 10 - Nov2014 | 63704M | 103424 | 98 | 561295 | 52 | 179538 | 28 | 100067 | 96 | 463026 | 54 | 1559.6 | 10 | 16 | ||||||||||||
eceServ RAID10 3 x SSD 3xVelociraptor - Dec 2014 | 63704M | 104797 | 99 | 554304 | 55 | 217355 | 36 | 107862 | 98 | 529560 | 65 | 2390.3 | 15 | 16 | ||||||||||||
eceServHome-1SSD - Nov 2014 | 63704M | 102237 | 98 | 551985 | 52 | 194237 | 29 | 102676 | 97 | 518187 | 62 | 2003.3 | 13 | 16 | ||||||||||||
ecewo3 SSDx1 - Nov2014 | 62G | 53191 | 85 | 109845 | 23 | 82854 | 13 | 63523 | 96 | 543786 | 58 | |||||||||||||||
ecewo32SSD | 31976M | 58493 | 81 | 60873 | 20 | 31223 | 21 | 57524 | 90 | 77272 | 15 | 189.3 | 1 | 16 | ||||||||||||
ecewo3-WOdb-NFS-share - Nov2014 | 62G | 55190 | 97 | 59859 | 9 | 22796 | 5 | 39248 | 56 | 75772 | 7 | 2865.9 | 14 | 16 | 1016 | 9 | 22628 | 31 | 884 | 9 | 853 | 9 | 3555 | 15 | 628 | 5 |
eceLinux3-NFS-serv - Nov2014 | 63840M | 45897 | 75 | 46221 | 5 | 29728 | 7 | 61443 | 98 | 71283 | 8 | 846.0 | 3 | 16 | 371 | 2 | 765 | 3 | 375 | 2 | 360 | 2 | 776 | 2 | 393 | 2 |
eceLinux2-NFS-ServSSDs - Dec 2014 | 63840M | 79281 | 95 | 94125 | 7 | 46257 | 10 | 83142 | 99 | 108043 | 12 | 1942.7 | 7 | 16 | 1111 | 3 | 2548 | 7 | 1153 | 4 | 1107 | 6 | 2769 | 6 | 1369 | 4 |
eceLinux3-NFS-serv | 63840M | 78349 | 94 | 93492 | 7 | 45249 | 10 | 83076 | 99 | 110195 | 12 | 1279.5 | 5 | 16 | 1122 | 4 | 2531 | 6 | 1143 | 4 | 1098 | 6 | 2795 | 6 | 1366 | 3 |
eceLinux3-NFS-Serv-1SSD | 63840M | 61707 | 94 | 97820 | 9 | 44910 | 10 | 62830 | 99 | 104103 | 10 | 1606.1 | 7 | 16 | 911 | 3 | 1911 | 4 | 933 | 1 | 910 | 4 | 2074 | 20 | 1019 | 1 |
eceLinux5-NFS-ServSSDs - Dec 2014 | 63G | 93304 | 79 | 95393 | 12 | 45286 | 15 | 104008 | 95 | 108949 | 22 | 1949.4 | 4 | 16 | 1146 | 3 | 2577 | 4 | 1182 | 4 | 1142 | 4 | 2782 | 3 | 1428 | 2 |
eceLinux5-NFS-serv | 63G | 87956 | 86 | 88906 | 14 | 42920 | 13 | 98124 | 94 | 103878 | 17 | 1288.9 | 6 | 16 | 1106 | 4 | 2569 | 3 | 1145 | 3 | 1088 | 4 | 2791 | 4 | 1361 | 3 |
eceLinux5-NFS-serv1SSD | 63G | 85018 | 88 | 88658 | 15 | 43126 | 13 | 99133 | 99 | 105891 | 18 | 1625.2 | 6 | 16 | 1099 | 5 | 2572 | 5 | 1113 | 5 | 962 | 5 | 2772 | 3 | 1265 | 3 |
eceLinux4-NFS-ServSSDs - Dec 2014 | 15G | 76250 | 86 | 105731 | 8 | 52504 | 14 | 84011 | 99 | 145489 | 15 | 6006.4 | 26 | 16 | 851 | 8 | 14091 | 24 | 1004 | 7 | 858 | 8 | 2361 | 9 | 1004 | 7 |
eceLinux4-NFS-serv | 15G | 79807 | 89 | 96881 | 7 | 50348 | 14 | 80505 | 99 | 144033 | 15 | 4901.9 | 23 | 16 | 866 | 7 | 13994 | 17 | 1003 | 7 | 879 | 7 | 2338 | 9 | 1000 | 7 |
eceLinux4-NFS-Serv-1SSD | 15G | 76019 | 87 | 88916 | 7 | 50145 | 14 | 83809 | 98 | 131553 | 16 | 4608.5 | 22 | 16 | 863 | 7 | 14247 | 17 | 1000 | 8 | 871 | 8 | 2316 | 11 | 996 | 8 |
Tests of InfiniBand using ConnectX-2 cards directly connecting 2 computers. fio --rw=randread --bs=64k --numjobs=4 --iodepth=8 --runtime=30 --time_based --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name task1 --filename=/testing/junk/1.txt --size=10000000 Testing local file system (7200 rpm HD) READ: io=8899.3MB, aggrb=303 680KB/s, minb=75 866KB/s, maxb=76014KB/s, mint=30003msec, maxt=30008msec or 8.9GB, 304MB/s, 75.9 MB/s, 6.7ms lat, 1187 iops read : io=2227.6MB, bw=76014KB/s, iops=1187, runt= 30007msec slat (usec): min=2, max=227, avg= 6.72, stdev= 7.00 clat (usec): min=224, max=100657, avg=6680.40, stdev=2089.69 lat (usec): min=229, max=100720, avg=6687.19, stdev=2090.73 NFS over RDMA to a striped pair of SSDs READ: io=94053MB, aggrb=3134.2MB/s, minb=802537KB/s, maxb=802575KB/s, mint=30001msec, maxt=30001msec or 94G, 3.1G/s, 803MB/s, lat 614us, 12539 iops read : io=23513MB, bw=802538KB/s, iops=12539, runt= 30001msec slat (usec): min=3, max=1146, avg= 7.57, stdev= 3.28 clat (usec): min=79, max=1876, avg=606.19, stdev=90.78 lat (usec): min=92, max=2045, avg=613.80, stdev=90.77 NFS mount with QDR ConnectX-2 cards: USING NFSoRDMA!! READ: io=94050MB, aggrb=3134.1MB/s, minb=802 245KB/s, maxb=803017KB/s, mint=30001msec, maxt=30001msec or 94GB, 3.1GB/s, 802MB/s, 615us, 12535 iops read : io=23504MB, bw=802245KB/s, iops=12535, runt= 30001msec slat (usec): min=4, max=474, avg= 7.43, stdev= 1.85 clat (usec): min=72, max=8837, avg=607.94, stdev=113.73 lat (usec): min=81, max=8848, avg=615.40, stdev=113.59 NFS mounted via RDMA using SDR or 10Gb/s with InfiniHost III cards READ: io=22319MB, aggrb=761758KB/s, minb=190363KB/s, maxb=190654KB/s, mint=30002msec, maxt=30003msec or 762MB/s, 190MB/s, 2.57ms lat, 2978 iops read : io=5586.2MB, bw=190654KB/s, iops=2978, runt= 30003msec slat (usec): min=4, max=835, avg= 7.00, stdev= 3.83 clat (usec): min=328, max=8411, avg=2559.16, stdev=486.51 lat (usec): min=336, max=8418, avg=2566.22, stdev=486.45 NFS mounted via RDMA using DDR or 20Gb/s with InfiniHost III cards READ: io=24430MB, aggrb=833785KB/s, minb=208410KB/s, maxb=208492KB/s, mint=30001msec, maxt=30003msec or 24.4G, 834MB/s, 208MB/s, 2.4ms, 3256 iops read : io=6107.2MB, bw=208436KB/s, iops=3256, runt= 30003msec slat (usec): min=3, max=617, avg= 7.00, stdev= 3.50 clat (usec): min=684, max=6448, avg=2355.94, stdev=296.73 lat (usec): min=695, max=6454, avg=2362.98, stdev=296.70 NFS mount over QDR ConnectX-2 cards: NFSoIB Run status group 0 (all jobs): READ: io=15 707MB, aggrb=536 085KB/s, minb=133 964KB/s, maxb=134 139KB/s or 15.7GB, 536MB/s, 134MB/s Testing with DDR InfiniHost III cards connected by SDR switch READ: io=13 788MB, aggrb=470 564KB/s, minb=117 607KB/s, maxb=117 725KB/s, mint=30004msec, maxt=30005msec or 13.8GB, 470MB/s, 117MB/s, 4.2ms, 1840 iops DDR at SDR speed read : io=3449.5MB, bw=117725KB/s, iops=1839, runt= 30004msec slat (usec): min=3, max=861, avg=11.37, stdev= 8.48 clat (usec): min=244, max=9225, avg=4154, stdev=896 lat (usec): min=261, max=9237, avg=4165, stdev=897 Testing DDR InfiniHost III connected directly rate: 20 Gb/sec (4X DDR) READ: io=13854MB, aggrb=472819KB/s, minb=118104KB/s, maxb=118301KB/s, mint=30003msec, maxt=30005msec or 13.9GB, 473GB/s, 118MB/s, lat 4.1ms, 1845iops DDR read : io=3460.6MB, bw=118105KB/s, iops=1845, runt= 30004msec slat (usec): min=4, max=1294, avg=11.57, stdev=11.96 clat (usec): min=1204, max=9557, avg=4131.76, stdev=750.82 lat (usec): min=1361, max=9566, avg=4143.40, stdev=751.04 
fio done on the local file system to 7200 rpm HD READ: io=8626.6MB, aggrb=294382KB/s, minb=73340KB/s, maxb=74071KB/s, mint=30002msec, maxt=30007msec read : io=2156.2MB, bw=73579KB/s, iops=1149, runt= 30007msec slat (usec): min=3, max=5737, avg=11.90, stdev=32.52 clat (usec): min=423, max=173382, avg=6898.98, stdev=3536.77 lat (usec): min=435, max=173416, avg=6911.03, stdev=3536.91 fio done over QDR IB READ: io=16 636MB, aggrb=567778KB/s, minb=141899KB/s, maxb=141999KB/s, mint=30002msec, maxt=30003msec or 16.7GB, 568 MB/s, 142MB/s, 3.4ms lat, 2218 iops QDR read : io=4160.6MB, bw=142000KB/s, iops=2218, runt= 30003msec slat (usec): min=4, max=1216, avg=11.92, stdev=11.06 clat (usec): min=56, max=170945, avg=3433.91, stdev=2056.65 lat (usec): min=266, max=171043, avg=3445.91, stdev=2058.37 Testing with 1Gb ethernet between machines READ: io=3 289.7MB, aggrb=112 210KB/s, minb=28 048KB/s, maxb=28067KB/s, mint=30003msec, maxt=30015msec or 3.2GB, 112MB/s, 28MB/s, 17.8ms, 438 iops 1Gb eth read : io=842048KB, bw=28058KB/s, iops=438, runt= 30011msec slat (usec): min=4, max=73, avg=16.43, stdev= 4.73 clat (usec): min=795, max=36337, avg=17793, stdev=3057 lat (usec): min=805, max=36346, avg=17809, stdev=3057 Testing between eceLinux1 and FreeNAS server at 10Gb/s READ: io=5944.7MB, aggrb=202 823KB/s, minb=50 662KB/s, maxb=50754KB/s, mint=30007msec, maxt=30010msec or 5.9G, 202MB/s, 51MB/s, 9.7ms, 793 iops 10Gb/s eth read : io=1487.5MB, bw=50754KB/s, iops=793, runt= 30010msec slat (usec): min=4, max=184, avg=15.30, stdev= 5.81 clat (usec): min=383, max=56617, avg=9636.57, stdev=3464.32 lat (usec): min=394, max=56628, avg=9652.00, stdev=3465.27 Testing eceLinux1 (10Gb/s) to eceServ (1Gb/s, hybrid SSD & 10k rpm striped FS) READ: io=3348.6MB, aggrb=114226KB/s, minb=28473KB/s, maxb=28606KB/s, mint=30011msec, maxt=30018msec or 3.3G, 114MB/s, 28.5MB/s read : io=857536KB, bw=28574KB/s, iops=446, runt= 30011msec slat (usec): min=4, max=106, avg=19.05, stdev= 3.49 clat (msec): min=4, max=29, avg=17.31, stdev= 2.47 lat (msec): min=4, max=29, avg=17.33, stdev= 2.47 NFS mount over 100Mb connection Run status group 0 (all jobs): READ: io=345 728KB, aggrb=11 457KB/s, minb=2 855KB/s, maxb=2 888KB/s or 346MB, 11MB/s, 2.86 MB/s, 2.89MB/s 100Mb eth READ: io=345792KB, aggrb=11458KB/s, minb=2858KB/s, maxb=2888KB/s, mint=30110msec, maxt=30177msec or 346M, 11.5MB/s, 2.28MB/s, lat 174 ms, 45 iops read : io=86976KB, bw=2888.7KB/s, iops=45, runt= 30110msec slat (usec): min=19, max=148, avg=95.14, stdev=20.98 clat (msec): min=7, max=347, avg=173.71, stdev=37.16 lat (msec): min=7, max=347, avg=173.81, stdev=37.17 100Mb/s ethernet with a very old switch READ: io=345984KB, aggrb=11465KB/s, minb=2854KB/s, maxb=2886KB/s, mint=30099msec, maxt=30177msec or 35M, 11.5Mb/s, 2.9MB/s, lat 176ms, 44 iops read : io=86272KB, bw=2859.5KB/s, iops=44, runt= 30171msec slat (usec): min=4, max=127, avg=11.99, stdev= 8.19 clat (msec): min=50, max=334, avg=176.25, stdev=21.05 lat (msec): min=50, max=334, avg=176.26, stdev=21.05 10Mb/s ethernet via switch downgraded to 10Mb/s per port READ: io=36416KB, aggrb=1145KB/s, minb=279KB/s, maxb=303KB/s, mint=30830msec, maxt=31778msec or 36MB, 1.2MB/s, 279kB/s, 1.80s, 4 iops read : io=8896.0KB, bw=286660B/s, iops=4, runt= 31778msec slat (usec): min=37, max=146, avg=102.63, stdev=15.05 clat (msec): min=948, max=3460, avg=1796.61, stdev=313.59 lat (msec): min=948, max=3460, avg=1796.72, stdev=313.59 NFS over 10Mb/s ethernet using a HUB !! 
Updated test - eceServ still 3 x SSD RAID 10 with 3 x 10k RPM Velociraptors, but now an i3 on SuperMicro with 64G and IB ConnectX-2 to the clients
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux2-t 128304M 148413 75 86777 5 136715 14 185525 99 612089 39 2773 13
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 9577 10 +++++ +++ 15481 11 8249 10 +++++ +++ 12548 13
eceLinux2-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA,128304M,148413,75,86777,5,136715,14,185525,99,612089,39,2772.8,13,16,9577,10,+++++,+++,15481,11,8249,10,+++++,+++,12548,13

Updated test - eceServ as above, but IB InfiniHost III to an i7 client
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux1-I 128304M 157857 79 105521 6 136272 12 183754 99 608778 17 3117 11
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 3348 5 +++++ +++ 5620 5 3574 6 20763 9 6845 6
eceLinux1-InfiniHost-to-i3-raid10-ssd-10krpm-Serv-NFSoRDMA,128304M,157857,79,105521,6,136272,12,183754,99,608778,17,3117.1,11,16,3348,5,+++++,+++,5620,5,3574,6,20763,9,6845,6

First test - IB connection, NFSoIB (not via RDMA) to a 7200 rpm HD
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
QDR-IB-NFS 128128M 80589 54 73291 7 40351 7 97960 61 113744 8 140.2 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 115 2 +++++ +++ 115 1 113 2 +++++ +++ 114 1
QDR-IB-NFS,128128M,80589,54,73291,7,40351,7,97960,61,113744,8,140.2,0,16,115,2,+++++,+++,115,1,113,2,+++++,+++,114,1

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
IB-QDR-NFSo 128128M 183721 98 505062 33 251227 24 175136 99 704626 45 9183 45
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 10997 15 +++++ +++ 17611 18 10897 18 +++++ +++ 28937 17
IB-QDR-NFSoRDMA-to-ssd,128128M,183721,98,505062,33,251227,24,175136,99,704626,45,9183.0,45,16,10997,15,+++++,+++,17611,18,10897,18,+++++,+++,28937,17
NFSoRDMA to a CentOS 7 server using ZFS with RAIDz over 3 x 250G SSDs, InfiniHost III cards
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
IB-DDR-NFSo 128128M 184130 99 681531 38 334695 35 173072 99 993987 29 5214 10
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1756 6 +++++ +++ 1462 11 1797 6 19544 10 1549 11
IB-DDR-NFSoRDMA-to-RAIDz-3-ssd,128128M,184130,99,681531,38,334695,35,173072,99,993987,29,5214.1,10,16,1756,6,+++++,+++,1462,11,1797,6,19544,10,1549,11

NFSoRDMA to a CentOS 7 server using ZFS with RAIDz over 3 x 250G SSDs, QDR ConnectX-2 cards
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
IB-QDR-NFSo 128128M 184594 99 688436 37 339845 34 175052 99 1018267 46 4801 25
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1351 19 +++++ +++ 1237 17 1354 15 +++++ +++ 1436 15
IB-QDR-NFSoRDMA-to-RAIDz-3-ssd,128128M,184594,99,688436,37,339845,34,175052,99,1018267,46,4801.2,25,16,1351,19,+++++,+++,1237,17,1354,15,+++++,+++,1436,15

Test of the RAIDz 3-SSD file system locally on the server
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ZFS-RAIDz-3-ssd 23G 126824 99 766040 79 352183 53 111284 99 815423 41 5484 18
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ 29821 99 +++++ +++ +++++ +++
ZFS-RAIDz-3-ssds-on-server,23G,126824,99,766040,79,352183,53,111284,99,815423,41,5484.1,18,16,+++++,+++,+++++,+++,+++++,+++,29821,99,+++++,+++,+++++,+++

btrfs of 3 SSDs in RAID 1 on the file server
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
btrfs-3-ssds-on 23G 123233 99 416982 14 170231 14 109029 98 427480 16 +++++ +++
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
btrfs-3-ssds-on-server,23G,123233,99,416982,14,170231,14,109029,98,427480,16,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
BTRFS with 3 SSDs in RAID 1 on the file server, tested from the client over QDR IB NFSoRDMA. NOTE: the server crashed the first time I tried this test; in the past my experience with BTRFS was that it wasn't yet stable.
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
IB-QDR-NFSo 128128M 182386 96 368255 26 205813 24 174850 99 532123 34 6633 32
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 14425 16 +++++ +++ 21168 14 14811 17 +++++ +++ 20127 12
IB-QDR-NFSoRDMA-to-BTRfs-3-ssd,128128M,182386,96,368255,26,205813,24,174850,99,532123,34,6632.5,32,16,14425,16,+++++,+++,21168,14,14811,17,+++++,+++,20127,12

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
QDR-IB-NFS- 128128M 185401 98 507588 31 241786 17 168968 99 702065 22 12098 18
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 6631 15 +++++ +++ 9521 20 6466 18 +++++ +++ 21849 18
QDR-IB-NFS-to-ssd,128128M,185401,98,507588,31,241786,17,168968,99,702065,22,12098.4,18,16,6631,15,+++++,+++,9521,20,6466,18,+++++,+++,21849,18

Test SSD on the server with Bonnie++
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
SSD-speed-on-se 23G 116827 98 551325 35 240756 22 110494 99 666847 28 +++++ +++
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
SSD-speed-on-server,23G,116827,98,551325,35,240756,22,110494,99,666847,28,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
IB-QDR-NFSo 128128M 187368 99 592361 36 326021 21 176537 99 1081013 56 9373 38
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 14954 16 +++++ +++ 20493 17 14270 17 +++++ +++ 28693 16
IB-QDR-NFSoRDMA-to-2-striped-ssd,128128M,187368,99,592361,36,326021,21,176537,99,1081013,56,9373.2,38,16,14954,16,+++++,+++,20493,17,14270,17,+++++,+++,28693,16

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
SSD-speed-on-se 23G 119803 99 680627 39 301202 25 111599 98 873434 23 +++++ +++
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
striped-2-SSDs-speed-on-server,23G,119803,99,680627,39,301202,25,111599,98,873434,23,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
NFS1gB-to-s 128128M 111587 59 114273 2 88361 3 123485 71 147153 2 7350 8
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1855 8 +++++ +++ 4340 5 1975 7 2327 10 5243 3
NFS1gB-to-ssd,128128M,111587,59,114273,2,88361,3,123485,71,147153,2,7349.8,8,16,1855,8,+++++,+++,4340,5,1975,7,2327,10,5243,3

InfiniHost III HBAs connected with an SDR switch
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
IB-SDR-to-s 128128M 172504 98 372110 15 205330 14 158240 99 702011 23 8276 33
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4580 14 +++++ +++ 8401 17 4461 16 +++++ +++ 19318 14
IB-SDR-to-ssd,128128M,172504,98,372110,15,205330,14,158240,99,702011,23,8275.7,33,16,4580,14,+++++,+++,8401,17,4461,16,+++++,+++,19318,14

This was testing the local HD on the client machine?? NFS connection lost???
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
NFS1gB-to-s 128128M 93253 52 78956 3 39927 2 104771 61 126355 3 146.6 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 116 2 32004 28 115 1 114 2 2241 23 115 1
NFS1gB-to-ssd,128128M,93253,52,78956,3,39927,2,104771,61,126355,3,146.6,0,16,116,2,32004,28,115,1,114,2,2241,23,115,1

OLD
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux3-NF 63840M 61707 94 97820 9 44910 10 62830 99 104103 10 1606 7
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 911 3 1911 4 933 1 910 4 2074 20 1019 1
eceLinux3-NFS-Serv-1SSD,63840M,61707,94,97820,9,44910,10,62830,99,104103,10,1606.1,7,16,911,3,1911,4,933,1,910,4,2074,20,1019,1

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux4-NFS-S 15G 76019 87 88916 7 50145 14 83809 98 131553 16 4608 22
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 863 7 14247 17 1000 8 871 8 2316 11 996 8
eceLinux4-NFS-Serv-1SSD,15G,76019,87,88916,7,50145,14,83809,98,131553,16,4608.5,22,16,863,7,14247,17,1000,8,871,8,2316,11,996,8
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux4-NFS-s 15G 79807 89 96881 7 50348 14 80505 99 144033 15 4902 23
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 866 7 13994 17 1003 7 879 7 2338 9 1000 7
eceLinux4-NFS-serv,15G,79807,89,96881,7,50348,14,80505,99,144033,15,4901.9,23,16,866,7,13994,17,1003,7,879,7,2338,9,1000,7

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux3-NF 63840M 78349 94 93492 7 45249 10 83076 99 110195 12 1279 5
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1122 4 2531 6 1143 4 1098 6 2795 6 1366 3
eceLinux3-NFS-serv,63840M,78349,94,93492,7,45249,10,83076,99,110195,12,1279.5,5,16,1122,4,2531,6,1143,4,1098,6,2795,6,1366,3

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux5-NFS-s 63G 87956 86 88906 14 42920 13 98124 94 103878 17 1289 6
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1106 4 2569 3 1145 3 1088 4 2791 4 1361 3
eceLinux5-NFS-serv,63G,87956,86,88906,14,42920,13,98124,94,103878,17,1288.9,6,16,1106,4,2569,3,1145,3,1088,4,2791,4,1361,3

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ecewo32SSD 31976M 58493 81 60873 20 31223 21 57524 90 77272 15 189.3 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
ecewo32SSD,31976M,58493,81,60873,20,31223,21,57524,90,77272,15,189.3,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

ECEServ: ./bonnie++ -m eceServBoot -d /tmp -u root
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceServBoot 63704M 82065 80 73747 9 38839 6 97081 92 197248 22 761.3 5
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
eceServBoot,63704M,82065,80,73747,9,38839,6,97081,92,197248,22,761.3,5,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

ECEServ: ./bonnie++ -m eceServBoot -d /home/sysadmin/junk -u root
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceServBoot 63704M 103424 98 561295 52 179538 28 100067 96 463026 54 1560 10
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
eceServBoot,63704M,103424,98,561295,52,179538,28,100067,96,463026,54,1559.6,10,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

eceWO3 - Adaptec 6805e RAID: 840 EVO 250G (6Gb/s) and WD10EADS-65M2B0 (3Gb/s) in a RAID mirror
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ecewo3 62G 53191 85 109845 23 82854 13 63523 96 543786 58 +++++ +++
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
ecewo3,62G,53191,85,109845,23,82854,13,63523,96,543786,58,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ecewo3-WOdb-NFS 62G 55190 97 59859 9 22796 5 39248 56 75772 7 2866 14
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1016 9 22628 31 884 9 853 9 3555 15 628 5
ecewo3-WOdb-NFS-share,62G,55190,97,59859,9,22796,5,39248,56,75772,7,2865.9,14,16,1016,9,22628,31,884,9,853,9,3555,15,628,5

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
eceLinux3-NF 63840M 45897 75 46221 5 29728 7 61443 98 71283 8 846.0 3
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 371 2 765 3 375 2 360 2 776 2 393 2
eceLinux3-NFS-serv,63840M,45897,75,46221,5,29728,7,61443,98,71283,8,846.0,3,16,371,2,765,3,375,2,360,2,776,2,393,2

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
SuperM Cent 128168M 182710 99 1061873 29 311626 15 164701 97 331800 8 3547 7
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 6027 11 +++++ +++ 6181 12 6127 12 +++++ +++ 11102 9
SuperMicro FreeNAS 10Gbe with 4x 1TB SSD to SuperMicro 10Gbe CentOS 7,128168M,182710,99,1061873,29,311626,15,164701,97,331800,8,3547.2,7,16,6027,11,+++++,+++,6181,12,6127,12,+++++,+++,11102,9

Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Linux12i7-t 128168M 11464 7 10016 0 7156 0 14406 8 14368 0 1745 3
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1095 0 28088 0 1638 0 1362 0 4419 36 2329 0
Linux12i7-to-FreeNAS-SSD-at1Gb,128168M,11464,7,10016,0,7156,0,14406,8,14368,0,1745.3,3,16,1095,0,28088,0,1638,0,1362,0,4419,36,2329,0
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Linux12i7-t 128168M 11468 14 11467 1 6901 1 12404 24 13175 1 1070 6
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 493 11 17779 9 677 9 499 12 1042 32 1005 0
Linux12i7-to-FreeNAS-SSD-at100Mb,128168M,11468,14,11467,1,6901,1,12404,24,13175,1,1070.0,6,16,493,11,17779,9,677,9,499,12,1042,32,1005,0

March 13, 2016 - AMD 8 core to eceServ with RAID 10 over 1Gb/s
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Linux3AMD-to 63840M 63975 96 96058 7 45568 10 65800 99 100507 10 1960 7
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1113 3 2531 6 1142 6 1106 4 2792 5 1382 5
Linux3AMD-to-eceSERV-SSD,63840M,63975,96,96058,7,45568,10,65800,99,100507,10,1960.0,7,16,1113,3,2531,6,1142,6,1106,4,2792,5,1382,5
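bonnie++ ships with bon_csv2txt and bon_csv2html, which re-render comma-separated result lines like those above back into readable tables. A small sketch, assuming the CSV lines were collected into a hypothetical file bonnie-runs.csv:

# Re-render saved bonnie++ CSV results as plain text and as an HTML table
bon_csv2txt < bonnie-runs.csv
bon_csv2html < bonnie-runs.csv > bonnie-runs.html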
Using Bonnie++ (http://www.coker.com.au/bonnie++/) I've tested the performance of various systems.
Build and run it as follows:
./configure
make
./bonnie++ -m ServerName -d /tmp -u regular_user
The %CP columns are CPU usage statistics.
The file tests are sequential file creation, sequential file deletion, and file creation in random order.
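For quick side-by-side comparisons, the CSV lines bonnie++ emits (shown throughout this page) can also be summarized with awk. A rough sketch, assuming the result lines were saved to a hypothetical bonnie-runs.csv and follow the bonnie++ 1.03 field order (name, size, then K/sec and %CP pairs, seeks, file count):

# Print machine name, block write/read rates, and random seeks per second
awk -F, '{printf "%-45s blk-write=%s K/s blk-read=%s K/s seeks=%s/s\n", $1, $5, $11, $13}' bonnie-runs.csv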
Server | Size | Seq Output Per-Chr K/sec | %CP | Seq Output Block K/sec | %CP | Rewrite K/sec | %CP | Seq Input Per-Chr K/sec | %CP | Seq Input Block K/sec | %CP | Random Seeks /sec | %CP | Files | Seq Create /sec | %CP |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
woDB SSD RAID-1 - May2014 | 32112M | 68625 | 99 | 255517 | 62 | 116224 | 22 | 75361 | 98 | 270482 | 19 | +++++ | +++ | 16 | ||
woDB - May2014 | 32112M | 69039 | 99 | 260258 | 63 | 118080 | 22 | 75692 | 98 | 261735 | 18 | +++++ | +++ | 16 | ||
ieee - May2014 | 15G | 89242 | 98 | 84128 | 13 | 42106 | 12 | 77584 | 83 | 193194 | 25 | 514.0 | 3 | 16 | ||
ieee 15k SAS RAID-1 - May2014 | 15G | 94683 | 99 | 154531 | 22 | 63067 | 18 | 87611 | 91 | 171622 | 25 | 588.8 | 4 | 16 | ||
Arbeau 300G RAID-1 10k rpm - May2014 | 15G | 80732 | 84 | 78604 | 8 | 39446 | 10 | 77845 | 79 | 112135 | 11 | 500.5 | 2 | 16 | ||
arbeau-1SSD | 31G | 77683 | 97 | 108299 | 14 | 53168 | 13 | 75634 | 92 | 219774 | 24 | 5804.9 | 22 | 16 | ||
Serv - May2014 | 63704M | 81685 | 80 | 73558 | 10 | 39226 | 7 | 91200 | 87 | 163871 | 19 | 701.7 | 5 | 16 | ||
eceLinux1 - May2014 | 63840M | 48923 | 62 | 69092 | 15 | 31357 | 8 | 69075 | 91 | 74402 | 8 | 209.7 | 0 | 16 | ||
Home CPU xUbuntu 500G HDD M4A785 MB - Dec2014 | 31968M | 75112 | 94 | 82801 | 16 | 32684 | 9 | 74429 | 89 | 96816 | 15 | 160.0 | 1 | 16 | ||
Admin - May2014 | 15464M | 47699 | 74 | 66472 | 22 | 29934 | 10 | 64674 | 84 | 70348 | 11 | 285.0 | 1 | 16 | ||
System - May2014 | 6576M | 50305 | 72 | 53666 | 10 | 24860 | 16 | 37116 | 63 | 72527 | 14 | 4544.5 | 35 | 16 | ||
System WD 1TB Green - May2014 | 6576M | 53832 | 74 | 44700 | 11 | 20513 | 12 | 45640 | 66 | 48859 | 10 | 213.8 | 1 | 16 | 30828 | 99 |
System 40G PATA + SSD - May2014 | 6576M | 52834 | 71 | 57261 | 14 | 26660 | 15 | 41495 | 66 | 71863 | 15 | 4552.5 | 35 | 16 | ||
System 40G PATA (SSD stayed in array!) - May2014 | 6576M | 53226 | 72 | 54672 | 12 | 32294 | 20 | 49952 | 73 | 71521 | 14 | 4988.3 | 33 | 16 | ||
Linux4 - May2014 | 15G | 77727 | 90 | 77773 | 9 | 34234 | 9 | 77477 | 86 | 114513 | 13 | 422.1 | 2 | 16 | ||
Web - May2014 | 15464M | 70110 | 96 | 122212 | 20 | 43551 | 12 | 67269 | 80 | 97098 | 13 | 663.2 | 3 | 16 | ||
eceWebSSD - Nov 2014 | 39G | 76712 | 96 | 128349 | 21 | 57215 | 14 | 84070 | 86 | 229732 | 32 | 5445.0 | 27 | 16 | ||
Mail - May2014 | 31G | 67163 | 92 | 90264 | 23 | 37505 | 13 | 68759 | 76 | 111940 | 15 | 284.2 | 1 | 16 | ||
System | Bonnie++ Results |
---|---|
eceLinux5 NFS to Serv, one SSD | Version 1.03e ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP eceLinux5-NFS-s 63G 85018 88 88658 15 43126 13 99133 99 105891 18 1625 6 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 1099 5 2572 5 1113 5 962 5 2772 3 1265 3 eceLinux5-NFS-serv1SSD,63G,85018,88,88658,15,43126,13,99133,99,105891,18,1625.2,6,16,1099,5,2572,5,1113,5,962,5,2772,3,1265,3 |
Arbeau, one SSD in parallel with Velociraptor | arbeau-1SSD,31G,77683,97,108299,14,53168,13,75634,92,219774,24,5804.9,22,16, |
Web 3Ware raid-1 10k Velociraptor and 512M Samsung SSD | eceWebSSD,39G,76712,96,128349,21,57215,14,84070,86,229732,32,5445.0,27,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++ |
Serv raid-1 5xSAS HDs and 1x1TB Samsung SSD | eceServHome-1SSD,63704M,102237,98,551985,52,194237,29,102676,97,518187,62,2003.3,13,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++ |
Mail 3Ware raid-1 10k Velociraptor | Mail,31G,67163,92,90264,23,37505,13,68759,76,111940,15,284.2,1,16, |
Web 3Ware raid-1 10k Velociraptor | Web,15464M,70110,96,122212,20,43551,12,67269,80,97098,13,663.2,3,16, |
ieee 15k rpm SAS MD RAID-1 | ieee,15G,94683,99,154531,22,63067,18,87611,91,171622,25,588.8,4,16 |
System - MD RAID-1 array 1TB WD Green drives | System,6576M,53832,74,44700,11,20513,12,45640,66,48859,10,213.8,1,16,30828,99, |
Linux4 (Adaptec mirror, 7200rpm SATA) | Linux4,15G,77727,90,77773,9,34234,9,77477,86,114513,13,422.1,2,16, |
System re-run as root | System,6576M,52834,71,57261,14,26660,15,41495,66,71863,15,4552.5,35,16, |
woDB | woDB,32112M,68625,99,255517,62,116224,22,75361,98,270482,19,+++++,+++,16, |
woDB re-run as root | woDB,32112M,69039,99,260258,63,118080,22,75692,98,261735,18,+++++,+++,16, |
ieee, CentOS 6 MD raid with LSI raid with 2 x 300G 15k rpm SAS and 2nd MD raid member is 7200rpm SATA | Version 1.03e ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP ieee 15G 89242 98 84128 13 42106 12 77584 83 193194 25 514.0 3 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 ieee,15G,89242,98,84128,13,42106,12,77584,83,193194,25,514.0,3,16, |
arbeau, CentOS 6 3Ware raid 1, 300G 10k rpm | Version 1.03e ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP Arbeau 15G 80732 84 78604 8 39446 10 77845 79 112135 11 500.5 2 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 Arbeau,15G,80732,84,78604,8,39446,10,77845,79,112135,11,500.5,2,16 |
Serv 15k rpm SAS 6 drives raid 10 | Version 1.03e ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP Serv 63704M 81685 80 73558 10 39226 7 91200 87 163871 19 701.7 5 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 Serv,63704M,81685,80,73558,10,39226,7,91200,87,163871,19,701.7,5,16 |
Linux1, 7200rpm SATA | eceLinux1,63840M,48923,62,69092,15,31357,8,69075,91,74402,8,209.7,0,16 |
Admin - MD raid 160G SATA 7200 rpm | Admin,15464M,47699,74,66472,22,29934,10,64674,84,70348,11,285.0,1,16, |
System - MD RAID 40G PATA without SSD?? (the SSD seems to have stayed in the array) | System,6576M,53226,72,54672,12,32294,20,49952,73,71521,14,4988.3,33,16, |
System, MD raid, 40G PATA drive and SSD | System,6576M,50305,72,53666,10,24860,16,37116,63,72527,14,4544.5,35,16, |