March 17, 2019

Fios Issue #2: tracert on Windows

I observed odd behavior on Windows with Fios.  When I ran 'tracert' in cmd on Windows 10, it showed only two hops, but on Linux, traceroute showed all the hops.

On Windows (screenshot): only two hops are shown.

On Linux (screenshot): all the hops are shown.

In short, it's Verizon Fios interfering with ICMP: Windows tracert uses ICMP, while Linux traceroute uses UDP by default.  I also tried Ubuntu on Windows (WSL), but got the same result of two hops.  It seems only traceroute on a real Linux box works properly with Fios.
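
If you want to verify this yourself on Linux, the stock traceroute can be told which probe type to use (these are standard traceroute flags; ICMP mode typically needs root):

# Default mode uses UDP probes -- this is the one that shows all hops on Fios
traceroute www.google.com

# Force ICMP ECHO probes (what Windows tracert uses); on Fios this should stop after a couple of hops
sudo traceroute -I www.google.com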

Solutions


1. CountryTraceRoute -
http://www.snapfiles.com/get/ctraceroute.html
A simple application -- not perfect, but seems to be OK.  The screenshot below is from the download site; when it's run, it shows the same information as tracert for hops #1 and #2, but shows more after that.



2. NMAP - https://nmap.org
This is the real replacement for tracert; it requires installing Npcap.  (WinPcap is no longer developed.)


Use:
nmap -sU --traceroute www.google.com

References

Discussion

Windows MTR

Note that the section below is here for reference purposes only.  Using 'mtr' did not fix or work around the Fios issue.



March 13, 2019

Fios Issue #1: DNS

Run cmd and run nslookup.  Look up any made-up word, like "badfios".  If it resolves to 92.242.140.21, then it is Verizon doing this.
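
For example, in cmd ("badfios" is just a made-up name; only the returned address matters):

C:\> nslookup badfios

A made-up name should normally fail with a non-existent-domain error; if it instead comes back with address 92.242.140.21, the Verizon resolvers are redirecting unknown hosts.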

That IP reverse-resolves (search for "reverse IP lookup") to "unallocated.barefruit.co.uk".  Verizon has some kind of agreement with that company to redirect to sponsored sites instead.

Change FiOS router DNS to:

71.250.0.14
71.242.0.14


If the entry is set to "automatic", it should show the following before you enter edit mode:

71.250.0.12
71.242.0.12
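
After switching to the .14 resolvers (or to test the .14 resolver directly without changing anything), the same bogus lookup should fail instead of resolving:

C:\> nslookup badfios 71.250.0.14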


See here for more info:
https://forums.verizon.com/t5/Fios-Internet/FIOS-DNS-Hack-Directed-to-unallocated-barefruit-co-uk92-242-140/m-p/726545#M49607
    djjsin, posting on the Verizon forums (08-08-2014):
    I got a response from Verizon today about this.

    "This is expected bahavior.  The Verizon Online DNS resolvers have NXDOMAIN redirection services that redirect any unknown host to a sponsored search page.  You can opt out of this by changing your resolver from .12 to .14."


How to change the DNS in the FIOS router -- follow this page.  The images may look a little different, but the steps are correct.


[Parental Control] OpenDNS, ddclient

There are several ways to do parental control, and usually a single solution is not enough.
Here is how the internet is used in a home setting:

User → device (iPad, computer) → [home router] → Internet
                                       |
                                     [DNS]

DNS is like a phone book.  When the user types a URL like "www.google.com" in the browser, the computer looks it up on the DNS server, the server responds with its address (the "IP"), and then the computer uses that IP to reach the destination.
 
Parental control can be done in each layer:
  • User (by parents)
  • Device
  • Router
  • DNS
In this post, user-level and DNS-level control are discussed.

Parental Control at User Level

Educating children and setting usage limits is the most important.

Set rules on:
  1. Time
  2. Place
  3. Content
I trust my kids, but don't trust those sites.  They want traffic to make money and they'll do anything to trick or attract people to visit the sites.
  • Time limit - duration of use and time of the day. 
  • Agree on where in the house the devices will be used.
  • Talk about content types - what's inappropriate, and that some sites may be harmful to them and might also damage the device (e.g. viruses).

Parental Control with DNS

Blocking is done at the DNS level: the server simply refuses to give out the address of an inappropriate site.  Set devices to use OpenDNS to block inappropriate sites.

Two methods to change it:
  1. On each device
  2. On home router
Changing the DNS setting is different for each device and for routers.  If you don't know how, just search for it, or visit this page - https://www.opendns.com/setupguide/#familyshield.

A couple of ways to use OpenDNS:
  1. Use predefined settings:
    Just set your devices' DNS to these without any registration; it is a pre-configured family setting ("FamilyShield") -- see the quick check after this list:
    • 208.67.222.123
    • 208.67.220.123
  2. Use custom settings:
    Register with OpenDNS, keep your IP updated with them, and it will block according to your customizations -- custom categories and a custom blacklist (unfortunately limited to 25 entries).
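
A quick way to confirm a device is really using FamilyShield is to query one of those resolvers directly.  (As far as I know, www.internetbadguys.com is a test domain run by OpenDNS for this purpose.)

# Ask the FamilyShield resolver directly; a blocked domain should come back with
# an OpenDNS block-page address instead of its real address.
nslookup www.internetbadguys.com 208.67.222.123
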
Custom OpenDNS Settings

Benefits of using custom OpenDNS:
  • Custom message on blocked sites
  • Customize categories to block
  • Customized black/white lists (up to only 25 though)
I won't go into details here, however; it is assumed you have some advanced knowledge -- otherwise, search on the topic.
  1. Register - https://signup.opendns.com/homefree/
  2. Update your dynamic IP with OpenDNS, one of these methods:
    1. via web page, manually
    2. Windows
    3. Linux
    4. Mac
These days, with high-speed internet, even a dynamic IP doesn't change often.  So even if your computer is only turned on once every few days for a short period of time, running the updater on that computer to refresh the IP with OpenDNS will suffice.

For Windows and Mac, just search for "ddclient" and you will find equivalent applications (examples not tried).
For Linux, I use 'ddclient' to update the setting at OpenDNS.  As of January 2019, OpenDNS has changed a few things around, and the older way (pointing ddclient directly at OpenDNS) no longer works.  You must go through dnsomatic until OpenDNS changes this.
  1. Set DNS to 208.67.222.222 and 208.67.220.220 (different from FamilyShield DNS)
  2. Go to https://www.dnsomatic.com, log in with your OpenDNS ID/PW, and set things up there.
  3. Set up ddclient, or wget/curl.
ddclient settings for dnsomatic:

# ddclient.conf (typically /etc/ddclient.conf); username/password are placeholders
use=web, web=myip.dnsomatic.com
server=updates.dnsomatic.com,      \
protocol=dyndns2,                  \
login=dnsomatic_username,          \
password=dnsomatic_password        \
all.dnsomatic.com
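
Before relying on the daemon or cron, the config can be tested once in the foreground using ddclient's debug flags:

# Run a single update with verbose output, without daemonizing
sudo ddclient -daemon=0 -debug -verbose -noquiet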


curl or wget:

curl --user "username:password" "https://@updates.dnsomatic.com/nic/update?hostname=hostname"

wget --user "username" --password="password" "https://@updates.dnsomatic.com/nic/update?hostname=hostname"
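
If you go the curl/wget route instead of ddclient, a cron entry is enough (username, password and hostname are placeholders, as above); note that unlike ddclient, it will send the update even when the IP hasn't changed:

# Update DNS-O-Matic every 30 minutes
*/30 * * * * curl -s --user "username:password" "https://updates.dnsomatic.com/nic/update?hostname=hostname" >/dev/null 2>&1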



Advantages of using ddclient:
  • It supports other dynamic-IP DNS services.  (And now, through the dnsomatic.com service, you can update those from the same dnsomatic settings.)
  • ddclient caches the IP address it updated previously, and if it hasn't changed, it won't send the update again.

March 9, 2019

Win10 Privacy

Windows 10 sends a lot of information back to Microsoft.  It may be less harmful than using Google, but it is still a very uncomfortable fact.

A couple of good utilities to control this are listed below (WPD and O&O).
Also change the account to a local account - https://support.microsoft.com/en-us/help/4027068/windows-10-switch-your-device-to-a-local-account


WPD

O&O


March 8, 2019

Notes on CUDA and Tensorflow

This is a note to myself.  I just had to re-install TensorFlow and wanted to put some notes for the record.

This is about installing CUDA, Anaconda, TensorFlow.




Environment: Win10 Pro 64-bit

I have old GPU graphics cards: a Tesla C2050 and a Quadro 600 (see the deviceQuery output below).
I got the Tesla for GPU programming many years ago before TensorFlow came out, and paid good money for it, but now it's only $60-$70 on eBay.

I can still use the old cards above with frameworks other than TensorFlow.  So to stay compatible with all my cards, I have to stick to CUDA 8, the last release that supports these older (Fermi) cards.  The latest CUDA is 10.1, which requires newer GPUs.

Due to some TensorFlow work I had to do last year, I bought a somewhat recent graphics card for it: a Quadro P2000.
TensorFlow GPU requires a minimum compute capability of 3.0, and the Quadro P2000 performs decently for experimental work.  I used it along with AWS -- comparing it against a small AWS GPU instance on cost, duration of the work, disk space, etc., I found buying this graphics card was a good decision.  For larger projects with a good budget, I would explore the AWS option.

So there are three graphics cards: two connected to actual monitors, and the P2000 used purely for GPU compute with TensorFlow.

To use TensorFlow with the GPU on the Quadro P2000, and to stay backward compatible with the older cards for other frameworks/C++, etc., install CUDA 8 and Anaconda first.
Next, install Python 3.5 for TensorFlow v1.3 with CUDA 8.  Run the Anaconda console.  By the way, I use ConEmu, so I use this entry for the Anaconda task:

%windir%\System32\cmd.exe "/K" C:\opt\Anaconda3\Scripts\activate.bat C:\opt\Anaconda3

And in Anaconda console, create environment for TensorFlow:

(base) C:\Users\kkim> conda create --name tensorflow python=3.5
(base) C:\Users\kkim> activate tensorflow

Then install all the required packages:

(tensorflow) C:\Users\kkim>conda install pandas matplotlib jupyter notebook scipy scikit-learn numpy nb_conda pillow h5py pyhamcrest cython

Now, install TensorFlow and Keras.  (If you don't want Keras, you can just install TF only.)  In order to install Keras, you have to follow these odd steps: install Keras and TF, then uninstall TF, and then install TF again.

It's because Keras needs TF to be installed, but installing Keras messes something up and there will be an issue with TF.  The solution is to uninstall TF and re-install it, which fixes the problem.  See Reference #4:

(tensorflow) C:\Users\kkim>pip install keras
(tensorflow) C:\Users\kkim>pip install tensorflow-gpu==1.3
(tensorflow) C:\Users\kkim>pip uninstall tensorflow-gpu
(tensorflow) C:\Users\kkim>pip install tensorflow-gpu==1.3

Because CUDA 8 is used, only TF v1.3 can be used; later versions of TF require a newer version of CUDA.

All done.  Now time to have fun with Jupyter and TF.  TF will use the Quadro P2000 only, but with the CUDA SDK, other frameworks can use all three cards.
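
As a quick sanity check (with the "tensorflow" environment active), TF can be asked which devices it sees; only the P2000 should show up as a GPU, since the Fermi cards are below the required compute capability:

(tensorflow) C:\Users\kkim>python -c "from tensorflow.python.client import device_lib; print([d.name for d in device_lib.list_local_devices()])"

Expect something like ['/cpu:0', '/gpu:0'] (exact device naming varies a bit between TF versions).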

Just a note...


Here is the output of deviceQuery -- deviceQuery is an example program that comes with the CUDA SDK from NVIDIA:

deviceQuery.exe Starting...
 

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 3 CUDA Capable device(s)

Device 0: "Quadro P2000"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 5120 MBytes (5368709120 bytes)
  ( 8) Multiprocessors, (128) CUDA Cores/MP:     1024 CUDA Cores
  GPU Max Clock rate:                            1481 MHz (1.48 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              160-bit
  L2 Cache Size:                                 1310720 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 9 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "Tesla C2050"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    2.0
  Total amount of global memory:                 3072 MBytes (3221225472 bytes)
  (14) Multiprocessors, ( 32) CUDA Cores/MP:     448 CUDA Cores
  GPU Max Clock rate:                            1147 MHz (1.15 GHz)
  Memory Clock rate:                             1500 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 786432 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (65535, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 5 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 2: "Quadro 600"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    2.1
  Total amount of global memory:                 1024 MBytes (1073741824 bytes)
  ( 2) Multiprocessors, ( 48) CUDA Cores/MP:     96 CUDA Cores
  GPU Max Clock rate:                            1280 MHz (1.28 GHz)
  Memory Clock rate:                             800 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 131072 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (65535, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 6 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 3, Device0 = Quadro P2000, Device1 = Tesla C2050, Device2 = Quadro 600
Result = PASS


References

  1. https://ulrik.is/writing/keras-tensorflow-with-cuda-8-and-cudnn-on-windows-10/
  2. CUDA version / TensorFlow version match - https://stackoverflow.com/questions/50622525/which-tensorflow-and-cuda-version-combinations-are-compatible#50622526
  3. https://medium.com/@minhplayer95/how-to-install-tensorflow-with-gpu-support-on-windows-10-with-anaconda-4e80a8beaaf0
  4. Installing TensorFlow with Keras: https://github.com/keras-team/keras/issues/5776

March 7, 2019

Bad USB - Part 1: Setting up H/W

What is it? 

It's a USB device that connects to a host computer as an HID device (keyboard or mouse, or both), and a pre-programmed payload can be sent to the host computer.

This is really a security issue.  Watch this video:



There are pre-made BadUSB devices you can purchase on the net, or you can DIY one using an Arduino, RPi or other microcontroller.  I decided to make one as an experiment -- not for hacking, but for automation and remote control that doesn't require any S/W on the target machine.

I used an RPi Zero W, set up to be an HID keyboard and mouse.  Connect this device to the target machine's USB port, and the host computer will recognize it as a keyboard and mouse.  I can then send commands to the RPi, and the RPi will send keystrokes or mouse movements to the host computer.

Requirements

  • Testing Target Machine: USB enabled systems – e.g. Linux, Windows, RPi
  • RPi Zero W (zero will work fine, but wireless will be needed for remote control) with microSD and RPi OS installed.  I have a few RPi zeros lying around, and for my purpose, this is better than using Arduino or other simple micro controllers.
  • RPi H/W modification (DIY) or Kit

RPi H/W Kit

Use this $5-$6 PCB add-on board for the RPi Zero instead of the DIY mod.  The DIY mod is actually pretty simple and costs almost nothing, but it doesn't seem that sturdy.  I also bought a case for the Zero and the add-on board from here for $3.

The add-on board looks like this:
I purchased the above case and add-on board from Banggood, and I found this place has a package that comes with both for $13.99 -- https://geekworm.com/products/raspberry-pi-zero-w-badusb-usb-a-addon-board-usb-connector-case-kit

DIY mod

See this picture from this site - https://www.novaspirit.com/2016/10/18/raspberry-pi-zero-usb-dongle:

Raspberry Pi Model

Which models support OTG (https://en.wikipedia.org/wiki/USB_On-The-Go)?  There is a nice chart on this page (https://gist.github.com/gbaman/50b6cca61dd1c3f88f41), and it shows that only the Zero models support OTG, but according to this page, models A and A+ also support it.  And someone posted this comment here (https://www.element14.com/community/thread/49633/l/have-a-raspberry-pi-3-model-b-usb-otg-port):
The model A and A+ have the USB port of the chip routed to a connector. Officially the wrong connector for OTG, but the signals are there.
To be safe, use the Zero or Zero W.  The Zero W is preferred, so WiFi can be used for remote development and control.

Set up RPi and H/W

There are many more steps -- so for now, just get all the required H/W and set up the RPi.  Download the Raspbian Stretch Lite image (https://www.raspberrypi.org/downloads/raspbian/), write it to the SD card (https://sourceforge.net/projects/win32diskimager/), and set it up as usual.  Note that the Lite image does not have a GUI.
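
Since the Zero W usually runs headless, it may help to enable SSH and WiFi before the first boot.  These are the standard Raspbian tricks (an empty file named "ssh" and a wpa_supplicant.conf dropped into the boot partition of the SD card); the SSID/password below are placeholders, and /boot should be replaced with wherever the card's boot partition is actually mounted:

# Enable the SSH server on first boot
touch /boot/ssh

# Pre-configure WiFi for the Zero W
cat > /boot/wpa_supplicant.conf <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="YourNetwork"
    psk="YourPassword"
}
EOF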

In Part 2, I'll write about setting up HID.
Part 3 will be about programming it in C and Python.

Linux Software

If the host computer is Linux and the BadUSB is attached to it, it's helpful to check what devices are attached to the host.  Run the following Linux s/w:

$ sudo lsusb -v                   # list attached USB devices with full descriptors
$ sudo lsinput                    # list input (keyboard/mouse) devices; from the input-utils package
$ sudo udevadm monitor --udev     # watch udev events live while plugging the device in


Windows Software

The following Windows software is helpful with USB devices -- it gives you great detail on attached USB devices:




References


March 6, 2019

Kinect 2 and Windows 10 - face recognition login ("Windows Hello"), SDK, python, etc.


Note on setting up "Windows Hello"

 

H/W, OS

  • Kinect2 -- XBox One Kinect
  • Kinect2 USB3 adapter
  • PC with USB3 port
  • Windows 10 pro 64-bit

My kids stopped using the Kinect2 a while ago, so I took it and bought the PC adapter (around $30-$35).  You can buy a Kinect2 on eBay for around $40-$90.  Btw, a USB3 port is required.

Install S/W

If you don't want to install the Kinect2 SDK (download page), follow this instruction.

Known Issues

  1. 2.0 SDK and developer known issues
    1. Troubleshooting and Common Issues
  2. Kinect2 disconnects/reconnects
  3. Kinect2 freezing, restarting 
  4. Other things you may want to try
I had the disconnect/reconnect and freeze/restart problems -- they were caused by Win10 privacy settings I had changed.  After following #2 and #3 above, it's fixed.


Python S/W

So far, this is the only one that works well -- https://github.com/Kinect/PyKinect2

Windows Hello is great.  IIRC, there were solutions like this a while back, before Win10 -- commercial s/w using a regular webcam -- but I remember it wasn't cheap just for logging in.

Win10 Hello face recognition is instantaneous.  The Python S/W is good to play around with.

Other Dev Resources



[OwnServer] Gitea -- host own github, gitlab


GitHub and Bitbucket are very useful, and Bitbucket even allows an unlimited number of free private repos, though with a 1 GB storage limitation.  However, I want to run my own service at home for privacy and security reasons, and to overcome the storage limitation for some projects.

I tried GitLab, but found it was too heavy for the computer I use as a personal server at home, which is an old PC with a small amount of memory and a not-so-fast CPU.

Then I found Gitea, a lighter alternative to GitLab.  Installation is very simple (follow this instruction).  Since I'm the only user at home, I set it up to use SQLite (the default configuration), although I do run MySQL.  It looks and feels like GitLab but is very light and more than enough for a small team.
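
For reference, the binary install is roughly this (the version number and download URL are whatever is current on the Gitea site; 3000 is Gitea's default port):

# Download the standalone binary, make it executable, and run the built-in web server
wget -O gitea https://dl.gitea.io/gitea/1.7.3/gitea-1.7.3-linux-amd64
chmod +x gitea
./gitea web
# then finish the setup (SQLite by default) in a browser at http://<server>:3000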





Note

[OwnServer] is a series of blog posts about running your own server and services on an old PC.  When cloud services and other public/commercial/free SaaS and SNS popped up everywhere, I moved everything to the cloud and those services.  Later, many of them were hacked, information was leaked, and some companies did not even acknowledge their security issues and flaws.  I still use a few services, but mostly for unimportant stuff or things that can't be run (or are too hard to run) on a small server at home.  Surprisingly, there are only a few things I couldn't run on a home server.  There are many open source projects that can run on an old PC at home and provide the same features as public/commercial services.  This blog series is my notes on installing and using them.



March 4, 2019

[OwnServer] Schedule server shutdown/sleep and wake up

I'm running a personal server at home and it doesn't need to be running 24x7.  It's set to shut down late at night and start up in the morning or afternoon.

Environment:
  • OS: Ubuntu 16.04
  • Hardware clock is set to UTC

Add the following cron entries for the 'root' user:

0 2 * * Sun-Thu /usr/sbin/rtcwake -m mem --date 14:00  >> /opt/automation/logs/rtcwake.log 2>&1

0 3 * * Fri-Sat /usr/sbin/rtcwake -m mem --date 08:00  >> /opt/automation/logs/rtcwake.log 2>&1



This uses the "rtcwake" command.
-m : the shutdown/suspend/sleep method.  "mem" is suspend-to-RAM.
--date : the wake-up time in 24-hour format.

It's set to suspend at 2am/3am, leaving plenty of time for batch jobs/backups that start at night before the machine suspends.
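
To test rtcwake without waiting for the schedule, it can be run once by hand with a short timer:

# Suspend to RAM now and wake up 5 minutes later (run as root)
sudo rtcwake -m mem -s 300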

References


March 3, 2019

Eclipse, tab to space



Use spaces instead of tabs:
  1. Windows > Preferences > Java > Code Style > Formatter > New (create new profile)
  2. Edit > "Tab policy" > "Spaces only" > [OK]
  3. Windows > Preferences > General > Editors > Text Editors > "Insert spaces for tabs" > [Apply and Close]

Replace tabs with spaces on save:
  1. Windows > Preferences > Java > Editor > Save Actions
  2. [X] Additional Actions > [Configure] > Formatter [X] Correct indentation 
  3. [OK] > [Apply and Close]