tdkyo

Clean and Undisturbed Space for Thoughts and Writing

Recent Essays

    Need to Fund and Enjoy Art

    If somebody is reading this in the far future, I would like to emphasize that this post was written in the midst of the COVID-19 outbreak, when state and local governments in the U.S. had instituted stay-at-home orders to contain the spread of the coronavirus. Perhaps the isolation imposed by the pandemic has given me some room in my head to think about certain concepts, such as art, at a deeper level.

    As I was driving around town to finish my grocery shopping, I turned on my local PBS radio station to listen to classical music. Music of swelling emotions poured out of the car's speakers as the string instruments led the melody for the rest of the orchestra to follow and echo. At certain points, it seemed the music was speaking to me directly, telling a story of a quiet but adventurous countryside from times past. As the music came to a close, there was a sense of nostalgia for a familiar place I had never visited.

    The music was The Lark Ascending by the English composer Ralph Vaughan Williams. I decided to listen to the piece again on YouTube, and I was pleasantly surprised by the civil comment section, where people were pouring their hearts out. It was a rare moment on the Internet where people were genuinely at their best. In case the video, the comments, or YouTube itself ever disappears from the Internet, I am sharing some of the heartfelt interactions below.

    [Screenshots of heartfelt YouTube comments]

    This last comment deserves additional attention. From classical music to jazz performances, high-art music has the ability to bring out the best of our humanity. Visual art, whether static paintings or photographs, can stir a deep feeling that initiates a long train of introspection. Performance art, whether a Shakespeare play or a Broadway show, confronts the audience with stories where the human stakes are high.

    Art whose creation is not solely commercially driven has the ability to engage the audience beyond mere entertainment. My experience with classical music always includes a stirring of my creativity based on the thematic experience illustrated by the composer. I believe other people's experiences with art differ from mine, but I also believe there is a common agreement that art is a necessity that can help us maintain the best of our humanity in this world.

    Unfortunately, COVID-19 has shut down the art world as governments race to develop a vaccine. I am afraid that we might lose a significant portion of our talent due to the financial strain COVID-19 has imposed on the art community. I believe it is worth thinking about using government funds to temporarily provide financial assistance to artists until the COVID-19 situation passes. Granted, the U.S. government and various state governments are under budgetary strain, and it is difficult to consider additional spending when our national budget deficits are skyrocketing. However, I have an uneasy feeling that we might be losing something permanent from our art community. Without drastic measures to protect it, I fear that we might be facing an irreversible cultural loss for future generations.

    Performance art schools closing; art venues shutting their doors; artists deciding to change careers permanently due to financial hardship. I don't want to live in a future where art is dominated by social media influencers and live streamers hunting for likes. Hopefully, more people will start investing time in enjoying art and realizing the value of funding it (whether by purchasing tickets to an art showing or maintaining a membership at a local museum).

    As for me, I am looking at some audio CDs to buy to bolster my classical music library. Once this pandemic is over, I look forward to visiting my local museums to expand my art horizons even further.

    Auto Check and Reconnect WiFi for a Raspberry Pi

    I have several headless Raspberry Pis around my house, and some of them use WiFi to connect to the home network. The WiFi connections are pretty stable most of the time, but I sometimes notice that a Pi gets disconnected from WiFi and never reconnects to the home network. Usually, I can solve this problem either by turning the WiFi antenna off and on from the command line on the affected Pi (which requires hooking up a keyboard, mouse, and monitor) or by simply rebooting the system by disconnecting and reconnecting the power supply. Although these two approaches solve the problem, I wanted an automated solution that manages the WiFi connection for me.

    How can I have the Pi disconnect and reconnect the WiFi antenna whenever there is a WiFi problem? Using a prewritten bash script and a cron job, I was able to have my Raspberry Pi (1) automatically check whether there was a WiFi problem, and (2) reconnect to the WiFi network by disconnecting and reconnecting the WiFi antenna.

    Creating the bash script

    First, I created a bash script in my home directory using GNU nano. I named the bash script "WiFichecker.sh".

    cd && nano WiFichecker.sh

    The cd command changes the current working directory to the user's home directory. The && (AND) operator allows me to run consecutive commands, where the subsequent command (nano WiFichecker.sh) runs only if the previous command (cd) ran successfully. Finally, nano WiFichecker.sh opens nano (our text editor) with a bash script file named "WiFichecker.sh" ready to be saved.

    Once nano is open, we can write the following commands in sequence on each line.

    ping -c4 google.com > /dev/null

    To determine whether we have a WiFi connection, we simply use the ping command to check whether our Raspberry Pi can reach one of Google's servers. The -c4 parameter indicates that we want to ping Google four times in case the first few pings do not work. The > operator redirects the output of ping -c4 google.com to the destination specified to the right of the operator. Here, > /dev/null ensures that all of the ping command's output gets thrown away. (/dev/null can be seen as a black hole in Linux.)
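
    As a small optional tweak (my own addition, not part of the original script), you can also discard any error messages ping prints when the network is down by sending standard error to the same place:

    ping -c4 google.com > /dev/null 2>&1

    The 2>&1 redirects standard error to wherever standard output is going, so a failing ping stays silent as well.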

    For the next block of code, it makes sense to see it as a whole in multiple lines.

    if [ $? != 0 ]
    then
            sudo ip link set wlan0 down
            sleep 15
            sudo ip link set wlan0 up
            sleep 15
            sudo dhcpcd
    else
            echo "nothing wrong"
    fi
    

    When we run the previous ping -c4 google.com > /dev/null command, an exit status is left in memory, and we can access its value through the variable $?. The $? variable will give a value of "0" if the previous command was successful and a value other than "0" if there were any issues. If our ping command did not encounter any errors (i.e., the device was able to ping and get a response from Google's servers), then the value of $? would be "0". However, if there were issues with our ping command (e.g., the device could not reach Google's servers, or Google's servers did not respond to our ping request), then the value of $? would not be "0".
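
    As a quick illustration you can try at a shell prompt (example.com here is just a stand-in host, not part of the script):

    ping -c1 example.com > /dev/null
    echo $?

    The echo $? line prints 0 if the ping succeeded and a non-zero value otherwise. Note that $? always reflects the most recently completed command, so it has to be checked before running anything else.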

    Thus, we can set up a conditional statement in bash, where a set of commands runs only when we encounter a problem with our ping command (i.e., the value of $? is not 0). When we have ping issues, we run a series of commands to (1) shut off the WiFi device, (2) wait 15 seconds, (3) turn the WiFi device back on, (4) wait 15 seconds, and (5) reconfigure the network interface to ensure that we can reconnect to the network.

    sudo ip link set wlan0 down

    This command turns off our WiFi device (wlan0) via superuser privileges.

    sleep 15

    This command makes our device wait for 15 seconds until moving on to the next command.

    sudo ip link set wlan0 up

    This command turns back on our WiFi device via superuser privileges.

    sleep 15

    Just like the previous sleep command, our device will wait for 15 seconds until moving on to the next command.

    sudo dhcpcd

    Assuming we have reconnected to our WiFi network, we can use the dhcpcd command via superuser privileges to reconfigure the network interface (e.g., determining which IP address to use) and ensure that we can communicate on the network again.

    That's it! This set of commands, run whenever we cannot reach Google's servers, should successfully reset our WiFi network interface and reconnect us to our designated network.

    The command echo "nothing wrong" is the branch of our conditional statement that runs when we have successfully pinged Google. I intentionally left this echo statement in for logging purposes.

    Our script as a whole is the following:

    ping -c4 google.com > /dev/null
    
    if [ $? != 0 ]
    then
            sudo ip link set wlan0 down
            sleep 15
            sudo ip link set wlan0 up
            sleep 15
            sudo dhcpcd
    else
            echo "nothing wrong"
    fi
    

    We can press the “Control” key and “O” together to save the bash script, and we can press the “Control” key and “X” together to exit nano.
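
    As an aside, an equivalent and slightly more compact way to write the same check (my own variant, not from the original post) is to test the ping command directly in the if statement:

    if ! ping -c4 google.com > /dev/null
    then
            sudo ip link set wlan0 down
            sleep 15
            sudo ip link set wlan0 up
            sleep 15
            sudo dhcpcd
    else
            echo "nothing wrong"
    fi

    Both versions behave the same; this one simply avoids the separate $? check.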

    Scheduling the WiFi checkup

    Now that we have our script, we can use a cron job entry to run it at regular intervals. Because the script contains commands that require superuser privileges, the cron job itself needs to run with superuser privileges, which we can do by opening the root user's crontab.

    sudo crontab -e

    If this is your first time running crontab via superuser privileges, crontab may ask which text editor to use to edit the cron table. I pick nano because I am most familiar with this text editor. Once the crontab is open, we can add the following command at the end of the text file.

    */5 * * * * sudo bash /home/[username]/WiFichecker.sh

    The first five columns preceding our command denote time fields, which we can adjust to run the script to our liking. I want to run the script every five minutes, so I write */5 * * * * to indicate that the script should run every 5 minutes, of every hour, of every day of the month, of every month, and of every day of the week. Feel free to adjust these fields to your liking.
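
    For illustration, here are a couple of alternative schedules (these are just examples of cron's time fields, not something the original setup requires):

    # run every 10 minutes
    */10 * * * * sudo bash /home/[username]/WiFichecker.sh
    # run once an hour, at minute 0
    0 * * * * sudo bash /home/[username]/WiFichecker.sh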

    Next, sudo bash /home/[username]/WiFichecker.sh executes our bash script (located in our home directory) via superuser privileges.
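
    If you would like the script's echo output (and any error messages) to be recorded somewhere, one option is to redirect the output in the cron entry; the log path below is just a hypothetical example:

    */5 * * * * sudo bash /home/[username]/WiFichecker.sh >> /home/[username]/wifichecker.log 2>&1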

    Afterward, we can save our cron table ("Control" key and "O" in nano) and exit the text editor ("Control" key and "X" in nano).

    Finally, we can either restart the Raspberry Pi (sudo reboot) or restart the cron service (sudo service cron restart) to apply the changes to our cron table.

    No more disconnected Raspberry Pis

    After implementing the WiFi checker and reconnection script, my Raspberry Pis have not had a network disconnection issue. I hope this guide helps ensure that your headless Raspberry Pis always have a stable WiFi connection!

    Backing Up Video to the Cloud for Motioneye

    Background

    One of the first projects I did when I got my first Raspberry Pi was making a security camera for my house. After my Raspberry Pi Zero W Camera Pack arrived from Adafruit, I hooked up the Raspberry Pi Zero W with the included camera module and installed motioneyeos on the device. After logging on to the motioneye web interface in my browser, my new security camera was ready to record videos and photos based on motion detection. One of the major benefits of using a Raspberry Pi with motioneye installed is that I had a fully automated WiFi security camera that could upload captured media to major cloud storage providers (e.g., Google Drive) without any monthly fees!

    Although I liked using motioneye, I did not particularly like using motioneyeos, because the Linux distribution did not allow installing third-party programs via the apt-get command. Motioneyeos was designed as a single-purpose distribution, focused on video surveillance and self-updating with minimal user intervention. Motioneyeos satisfies many people's needs, including those who may not be familiar with the Linux operating system. As I became more accustomed to the Linux command line, however, I wanted to look beyond what motioneyeos has in store.

    Motioneye under Raspbian

    Taking out the SD card, I downloaded and installed Raspbian (now known as Raspberry Pi OS) on my Raspberry Pi 3 B+ and hooked the camera module up to the more powerful device. I decided to use my Raspberry Pi 3 B+ instead of my Raspberry Pi Zero because I found that the Pi Zero was too slow to capture video at high resolutions. I also decided to use Raspbian instead of motioneyeos because I wanted to use Rclone to back up my video and photo files to my Google Drive.

    Long term storage on Google Drive with a small SD Card

    Even with a relatively large SD card (128 GB), I can easily accumulate enough motion-activated video footage to fill up the card within three to four days. I have plenty of cloud storage on Google Drive that could hold a lot more video than my SD card. How can I set up an automated system where motioneye keeps videos for only a few days, while the footage is backed up for longer-term storage, such as keeping video files for up to three weeks?

    I used a combination of motioneye, rclone, and a cron job to get it all done automatically.

    Motioneye

    After installing Raspbian, I installed motioneye by following these steps. After logging into motioneye, I opened the settings tab and toggled open both the "Still Images" and "Movies" sections. I set both "Preserve Pictures" and "Preserve Movies" to "For One Day." Motioneye will now keep only one day's worth of media moving forward. We will use rclone to create long-term storage on our Google Drive.

    rclone for interfacing with Google Drive

    Rclone is "a command line program to manage files on cloud storage" (rclone.org). Rclone allows users to interface with various cloud storage providers, almost like an attached storage drive. Rclone can interface with Google Drive, so I followed the instructions on rclone's Google Drive page to set up an Rclone remote interfaced with my Google Drive. I named my Google Drive remote GoogleDrive:.
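
    For reference, the remote setup is an interactive process started with rclone config, and you can confirm that the remote was created by listing the configured remotes (both are standard rclone commands; GoogleDrive: is simply the remote name I chose above):

    rclone config
    rclone listremotes

    Once the remote is set up, rclone listremotes should print GoogleDrive: among your configured remotes.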

    Now, I want to back up all my motioneye-captured media files to a folder within my Google Drive. So, I used the following command to create a folder named "motioneye" at the root of my Google Drive.

    rclone mkdir GoogleDrive:motioneye

    Granted, I could use Google Drive's web interface to create the same folder in my browser, but I wanted to familiarize myself with the features rclone has available for managing my cloud storage. To check whether the folder was created, I used rclone's ls command,

    rclone ls GoogleDrive:

    where, among other files and folders, rclone would list our newly created folder “motioneye.”
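
    If you only want to see directories rather than every file, rclone also has an lsd subcommand (not part of my setup, just a handy alternative):

    rclone lsd GoogleDrive: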

    Placing all the commands together on a bash script

    Using my favorite Linux text editor (GNU nano), I wrote the following commands, in order, into a bash file named "rcloneBackup.sh".

    killall rclone;

    This command checks for and kills any previously running rclone instance. Sometimes, due to network issues, rclone may run for a long time, and we may inadvertently launch another rclone instance, which would slow the backup process even further. Thus, we make sure only one rclone instance will be running moving forward.

    rclone delete GoogleDrive:motioneye --min-age 21d;

    Via rclone, we can selectively delete files in our remote's motioneye folder based on the age of each file. Using the --min-age option, we specify the minimum age a file must reach before rclone deletes it. I set it to 21 days based on my own preference. (You can adjust this parameter to your liking.)
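
    Before trusting a deletion rule, it can be worth previewing what would be removed using rclone's --dry-run flag (my own suggestion, not part of the final script):

    rclone delete GoogleDrive:motioneye --min-age 21d --dry-run

    With --dry-run, rclone only reports what it would delete without actually removing anything.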

    rclone rmdirs GoogleDrive:motioneye --min-age 22d --leave-root;

    Where the previous delete command removes files, the rmdirs command removes folders. If the file deletion command ran successfully, we would be left with empty folders older than 21 days. Thus, I set the --min-age parameter one day later than in the previous command. The --leave-root option prevents rclone from attempting to delete the root folder "motioneye".

    rclone cleanup GoogleDrive:;

    When you delete files on Google Drive via rclone, the deleted files end up in Google Drive's Trash. To prevent the Trash from accumulating deleted files (and filling up the Google account's storage quota), I use this command to tell rclone to empty Drive's Trash.

    (sleep 230m && killall -9 rclone) & (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv);

    There are two concurrent sets of commands going on. The first half of the line, (sleep 230m && killall -9 rclone), acts as a timer that waits 230 minutes and then terminates our rclone instance. We add this timer to make sure we do not have multiple rclone instances running on top of each other when this series of commands runs again after 230 minutes.

    The second half of the line, (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv);, copies all the files from the folder where motioneye stores recorded media (the default path of /var/lib/motioneye/) to our rclone remote's folder. I used the --exclude parameter to skip any *.thumb files, because those are merely thumbnails of the media files and we don't need them in cloud storage. I also used the --transfers parameter to limit transfers to one file at a time, because I found that concurrent transfers of exceedingly large files tend not to finish within our 230-minute window. Finally, I added the -vv flag to have rclone report detailed progress to the command line console.
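
    If the upload tends to saturate your home connection, one optional variation (my own sketch; the 1M value is just an example) is to cap rclone's bandwidth with the --bwlimit flag:

    (sleep 230m && killall -9 rclone) & (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 --bwlimit 1M -vv);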

    That’s it! Placing all of the commands in sequence together, we have the following series of commands for our bash script “rcloneBackup.sh”.

    killall rclone;
    rclone delete GoogleDrive:motioneye --min-age 21d;
    rclone rmdirs GoogleDrive:motioneye --min-age 22d --leave-root;
    rclone cleanup GoogleDrive:;
    (sleep 230m && killall -9 rclone) & (rclone copy /var/lib/motioneye/ GoogleDrive:motioneye --exclude "*.thumb" --transfers=1 -vv);
    

    I saved the bash script to my home directory, which is usually located at /home/[username]/. Afterward, I tested the script to make sure that all the commands were working.

    bash rcloneBackup.sh
    

    Crontab to regularly run the backup script

    After verifying that our bash script works, we now use crontab to run it at regular intervals. If you are not familiar with cron, please check out the short guide by the Raspberry Pi Foundation. We can start by opening crontab.

    crontab -e

    If this is your first time running crontab, the system will ask you to pick a text editor to modify your crontab (also known as the cron table). I usually pick nano, because that is the text editor I am most familiar with for editing text files in Linux.

    Next, I navigate to the end of the crontab file to enter our new entry. If you recall, we set a timer of 230 minutes before terminating the rclone instance. The reason for picking 230 minutes was to ensure that we could run our bash script again every 240 minutes, or 4 hours. (Feel free to modify those time values.) To schedule our bash script to run every 4 hours, I added the following line to our crontab.

    0 */4 * * * bash "/home/[username]/rcloneBackup.sh"

    The first column represents "minutes" in crontab, and placing 0 in the first column tells cron to run the script when the minute is 0 (e.g., 1:00 AM, 2:00 PM, or 3:00 AM, but not 1:01 AM, 2:02 PM, or 3:03 AM). The second column represents "hour", and writing */4 tells cron to run the script every four hours starting at 12:00 AM. The third column represents the day of the month, the fourth column represents the month, and the fifth column represents the day of the week. I placed a * in the third, fourth, and fifth columns because I want the script to run on every day, month, and weekday.

    The second part of the entry, bash "/home/[username]/rcloneBackup.sh", is the command cron runs every four hours; it executes our prewritten bash script from earlier.
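
    Because -vv makes rclone fairly chatty, you may also want to capture that output somewhere you can inspect later; one option (with a hypothetical log path of my own choosing) is to redirect it in the crontab entry:

    0 */4 * * * bash "/home/[username]/rcloneBackup.sh" >> /home/[username]/rcloneBackup.log 2>&1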

    After making sure that there aren’t any errors in our crontab entry, I save our modified crontab and exit our text editor.

    To make sure that our new crontab entry is loaded to our cronjob, I restarted the cron service.

    sudo service cron restart

    Await results

    That's it! Every four hours, our Raspberry Pi will upload all of our captured media to our Google Drive and keep it there for three weeks. Meanwhile, motioneye deletes the locally captured media files after one day, so our SD card will not fill up to capacity.

    You can now navigate to your Google Drive motioneye folder to view your captured media using your web browser from anywhere you have an internet connection. I hope this write-up helps you manage your limited SD card storage on your Raspberry Pi while having a much larger archive of captured media files in the cloud!

    Disable Caching on Chrome (temporarily)

    While working on my Hugo template, I noticed that my CSS files were not updating on my Chrome browser. Initially, I thought the CSS file was not updating on my server because the CSS file did not change when I accessed the file directly through my Chrome browser.

    After I checked the CSS file by FTPing into the server, I noticed that the CSS file had been updating on the server side. I then knew that the issue was with my Chrome browser.

    Browser caching saves resources (especially bandwidth) on both the browser side and the server side. Usually, browser caching does not hinder the user's browsing experience. However, the caching in Chrome was interfering with my CSS development because I could not see my most recent CSS changes reflected on my website.

    It seems Chrome likes to cache CSS files, most likely because CSS files usually do not change often. After some quick Google searches, I was able to find a quick solution to stop browser caching on Chrome temporarily.

    1. Open DevTools either by (1) right-clicking anywhere on the webpage and clicking Inspect, or (2) pressing the F12 key on the keyboard.
    2. Open Settings either by (1) left-clicking the settings icon on the top right-hand side of the newly opened window, or (2) pressing the F1 key on the keyboard.
    3. Scroll down until you see Disable cache (while DevTools is open). Make sure that this option is checked.

    After following these steps, Chrome will not cache resources while DevTools is open.

    Citation

    Google chrome css doesn’t update unless clear cache, Stack Exchange (Nov. 30, 2013), https://stackoverflow.com/questions/20300400/google-chrome-css-doesnt-update-unless-clear-cache.

    Setting Up Hugo

    In general, writing things down helps you retain information. Also, hanging out on third-party social media websites affects productivity, creativity, and sanity.

    These two seemingly random thoughts are the reason why I decided to spend a few hours learning about website hosting and all the self-host options that could give me a clean and undisturbed space to collect my thoughts and write down useful information that I know I would revisit later.

    Although I had used Pelican to set up a static website before, I decided to give Hugo a try. As always, learning a new system takes time in the beginning, but I felt that learning Hugo (including its incredible template system and flexible customization options) would be worth my time. So far, I was able to install Hugo on my VPS easily, and I spent a couple of hours figuring out the template system.

    I found the following features quite attractive for Hugo:

    • Static website generation
    • Markdown support
    • Live “preview” server support
    • Active development community

    Static website generation

    In the past, I ran a lot of Content Management Systems (CMS) that used a combination of PHP and MySQL to serve a dynamic website. Although a CMS has its place under certain circumstances, the constant maintenance it requires creates a liability: hackers can attempt to hijack the site through unpatched security vulnerabilities.

    In that regard, static website generators are wonderful, because there are only static files. I don't have to worry about constantly patching PHP and MySQL services while making sure that my CMS is up to date. Also, I am mostly hosting static content on this website (text and media files), so I do not require any of the interactive features available in a dynamic CMS.

    Markdown support

    Before I found out about Markdown, I dreaded converting my drafts from a word processor (or a text editor) into HTML-compliant form. Yes, there are various WYSIWYG editors out there, but those editors tend to (1) produce a lot of junk HTML (e.g., empty HTML tags that only clutter the source code), and (2) produce a lot of metadata junk whenever I paste something from Microsoft Word.

    Hugo and other static website generators allow drafting and publishing text content in Markdown. This lets me draft my text in whatever interface I prefer (Microsoft Word), paste it into my favorite text editor (Notepad++) with very light formatting, and be done. I can spend less time formatting content and more time writing.
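
    For anyone new to the workflow, a Hugo content file is just Markdown with a small front matter block at the top. A minimal sketch, typically created with hugo new posts/my-first-post.md (the filename, title, and date here are made up for illustration, and depending on your Hugo version and archetypes the front matter may be TOML between +++ markers instead of YAML), might look like this:

    ---
    title: "My First Post"
    date: 2020-05-20
    draft: true
    ---

    ## A heading

    Some **bold** text, a [link](https://example.com), and a short list:

    * item one
    * item two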

    Live “preview” server support

    There are a lot of competing static website generators in the open source community, and I think people should try multiple solutions before investing in one. One feature that caught my attention while using Hugo was the live "preview" server mode, where Hugo continuously re-renders the website in the background while you make changes in your Hugo working directory. The preview mode does not publish your website with the latest changes. Instead, if you are looking to make a substantive change on the backend, the "preview" server lets you see those changes without having to publish the site to a temporary location.

    I personally used this feature while working on my custom template, and it really saved me a lot of time! The command for starting up Hugo’s live “preview” server is:

    hugo server

    By default, the preview server binds to "127.0.0.1," which makes it inaccessible if you are remotely administering the server. Thus, if you want to use the preview feature and view the changes remotely, try:

    hugo server --bind yourserverurl.com

    where yourserverurl.com is your server’s address. After running this command, you can browse to the preview server via port 1313. (example: yourserverurl.com:1313 on your browser)
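
    Depending on your environment (this is an assumption about your setup, not something the post requires), you may find it simpler to bind to all interfaces and set the public address explicitly with the --baseURL flag:

    hugo server --bind 0.0.0.0 --baseURL http://yourserverurl.com

    The preview should then still be reachable at yourserverurl.com:1313 as described above.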

    Active Development Community

    Finally, Hugo seems to have a very active development community, with a lot of premade templates available and a robust forum to help users and inform developers. I had some issues while developing my template, and I was able to get answers to most of my questions by searching past community posts.

    Moving Forward

    I have most of my barebones template files done. I am still learning the ins and outs of Hugo, but I think I have a good online writing setup ready to go.

    What's next? I have an unorganized collection of website links, code, and random thoughts saved in my Google Keep account. I will start memorializing (a legal term meaning "to do something that helps people to remember." See memorialize - TransLegal) my collected data in posts on this website.

    As I mentioned at the beginning of this post, the main purpose of this website is to provide a clean and undisturbed space for myself to collect my thoughts and write down useful information that I know I would revisit later. My secondary purpose is to continue to practice good writing and share my knowledge and thoughts with the rest of the world.

    Thanks for visiting!