🎵 Streaming Service Wars

tl;dr Apple Music suffers from only one problem: iTunes. YouTube Music feels half-baked at best.

Once in a while I like to review my subscription choices to see if I am still getting the most value out of each service. Most recently I started paying for YouTube Premium as a way to legitimately skip ads and access exclusive creator content. One perk of this subscription is the inclusion of YouTube Music. I have been a happy Spotify subscriber for many years now, but glitchy application behaviour and some missing catalogue items made me second-guess whether I am getting the most out of that subscription. I decided to revisit Apple Music and YouTube Music to evaluate their service quality.

Apple Music feels like quite a solid option, with one huge caveat: iTunes. I am a Windows user for my primary desktop and laptop at home, and iTunes is an evil that I avoid whenever possible. The Apple experience has continued to decay into near unusability on Windows. I know they have the online player so you can avoid using iTunes, but I wish they would bundle that clean experience into a distinct Windows application. Much of the legacy library and device management that iTunes provides isn’t necessary for my day-to-day music experience. Despite the iTunes nightmare, the experience on my mobile devices, such as my iPad and iPhone, is excellent.

While there was no direct way to port my Spotify playlists to Apple Music, a free online service, Tune My Music, allowed me to move my Spotify playlists and favourites over smoothly. Given these services are all mature at this point, I was a little disappointed that this migration process isn’t native to Spotify, Apple Music, or YouTube Music. Despite the desired lock-in effect, it’d be much easier to poach me as a customer if I could easily take my music library with me.

Fresh off this mixed Apple experience, I was hoping that Google would have produced something high quality with YouTube Music. They spent months advertising the service to me as an upgrade to Google Music (which I had used prior to Spotify) and a great value-add to the general YouTube experience. Immediately I noticed how rushed this product felt. Like Apple, they have no native Windows application (or Mac application, for that matter), and their mobile application feels like a lazy reskin of the YouTube application. This set the tone for the entire experience I was about to endure.

While most content providers such as Tidal, Apple, and Spotify seem to have gone through the trouble of licensing large libraries of music from many record labels, it seems like YouTube has done the bare minimum required to be called a music service. It’s almost like they were sitting around the boardroom one day and someone said “hey, people upload music to YouTube, YouTube can make playlists, why not charge for this?” and then received a standing ovation. The quality of the experience is extremely variable: some songs are from people who just happened to have uploaded the content, while others are label-uploaded via VEVO and the like. Then I attempted to import my playlists.

While the Tune My Music experience was good with Apple, YouTube Music was a nightmare. First, since there is no sane user-level way to import, the tool integrates with the YouTube API directly and promptly hits the 10,000-action limit as it attempts to match and write playlists via the API. My collection is still only about three-quarters imported, and I have to curate every re-run to get another chunk of my library across. Then the fun starts: I would say a solid 10% of my library is simply unavailable on YouTube. Either people haven’t uploaded the music, or Google never bothered to sign detailed agreements with the various labels.

So, after my experience with Apple Music and YouTube Music, I can comfortably say that Spotify is still the reigning champ in my books: cross-platform applications on Windows and Mac, decent mobile apps, and integration with Sonos and most voice assistants. I’ll live with the odd quirks and bugs; at least I can experience them on every platform consistently and cleanly.

Preparing a Portable Disk for macOS

I purchased a Western Digital external hard disk from Best Buy. On the shelf they had one for “Windows” and one for “macOS”, and the macOS-compatible one was priced $20 higher than the Windows one. Marketing will not fool me today – time to reformat NTFS to a JHFS+ filesystem.

Immediately, I plugged the disk into my MacBook Pro and opened Disk Utility. After erasing the NTFS partition, I attempted to create a new JHFS+ partition, only to be met with the message “Erase process has failed, press done to continue.” Expanding the output displayed the error “MediaKit reports not enough space on device for requested operation.” Confused, as I had removed all the existing partitions, I attempted to manually create the new partition as Mac OS Extended (Journaled). No matter the sizing or naming, partition creation would always fail.

After hunting around online, I finally reached a workable solution. I am running macOS High Sierra, which introduced the new APFS file system. If you wish to format your external disk with APFS, you will first need to format it as HFS+ and subsequently migrate it to APFS.

First, you need to get the name of the disk you are trying to format. On my MacBook with High Sierra there were two existing system disks: “disk0” for the recovery files and the APFS container volume, and “disk1”, a synthesized set of APFS volumes within that container. As such, the external hard disk appeared as “disk2”, but this may vary on your system depending on what you have mounted.

diskutil list

Once you’ve identified your disk, unmount it.

diskutil unmountDisk force disk2

Once this completes, you will want to overwrite the boot sector for the external.

sudo dd if=/dev/zero of=/dev/disk2 bs=1024 count=1024

Lastly, you will want to partition the disk as JHFS+, including the name for the new volume.

diskutil partitionDisk disk2 GPT JHFS+ "My Passport" 0g

The magic here is removing the boot sector (also called the master boot record, or MBR). The MBR of a disk holds both boot information and the partition table. If that table is unreadable or corrupt, it can render partitions unmanageable. Zeroing out the boot sector forces macOS to create a fresh GUID partition scheme that it can manage.

Here is the output of the entire formatting operation as run on my system.


pearce at Deans-MacBook-Pro in ~/Projects
$ diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                 Apple_APFS Container disk1         250.8 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +250.8 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume Macintosh HD            116.0 GB   disk1s1
   2:                APFS Volume Preboot                 19.8 MB    disk1s2
   3:                APFS Volume Recovery                509.8 MB   disk1s3
   4:                APFS Volume VM                      2.1 GB     disk1s4

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk2
   1:                  Apple_HFS                         1.0 TB     disk2s1


pearce at Deans-MacBook-Pro in ~/Projects
$ diskutil unmountDisk force disk2
Forced unmount of all volumes on disk2 was successful

pearce at Deans-MacBook-Pro in ~/Projects
$ sudo dd if=/dev/zero of=/dev/disk2 bs=1024 count=1024
Password:
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.491402 secs (2133846 bytes/sec)

pearce at Deans-MacBook-Pro in ~/Projects
$ diskutil partitionDisk disk2 GPT JHFS+ "My Passport" 0g
Started partitioning on disk2
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk2s2 as Mac OS Extended (Journaled) with name My Passport
Initialized /dev/rdisk2s2 as a 931 GB case-insensitive HFS Plus volume with 
a 81920k journal
Mounting disk
Finished partitioning on disk2
/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk2
   1:                        EFI EFI                     209.7 MB   disk2s1
   2:                  Apple_HFS My Passport             999.8 GB   disk2s2

Building DBD::Oracle on macOS

As you may know, DBD::Oracle is one of the most challenging database drivers to build and install. I recently switched to macOS Sierra and found myself needing to install DBD::Oracle within my Perlbrew-installed local Perl. I opted to use the latest Instant Client from Oracle (12.1) and the latest stable DBD::Oracle release (1.74) for my system.

Install Oracle Instant Client 12.1 Packages

The easiest way to get DBD::Oracle built is against the Instant Client. Download the three Instant Client packages (Basic, SDK, and SQL*Plus, or the corresponding three for the client version you desire) and extract all three zips to the same path on your system. I opted to keep my install local within my /Users directory, but you can install it to /Library/Oracle as well.

Build DBD::Oracle 1.74 Module

On macOS Sierra I had to perform the following steps to ensure a successful build. You may need to adjust them based on the location of your Instant Client. The primary issue is that macOS Sierra does not like dynamic path linking using @rpath; the fix is to set the full library path in the binary.

# setup the Oracle Instant Client environment
export ORACLE_HOME=/Library/Oracle/instantclient_12_1
export DYLD_LIBRARY_PATH=$ORACLE_HOME

# download and unpack DBD::Oracle
cd /tmp
wget http://search.cpan.org/CPAN/authors/id/P/PY/PYTHIAN/DBD-Oracle-1.74.tar.gz
tar -xzf DBD-Oracle-1.74.tar.gz
cd ./DBD-Oracle-1.74/

# build the module against 12.1
perl Makefile.PL
make

# fix for problem with dynamic linking on MacOS Sierra
install_name_tool -change @rpath/libclntsh.dylib.12.1 \
    /Library/Oracle/instantclient_12_1/libclntsh.dylib.12.1 \
    ./blib/arch/auto/DBD/Oracle/Oracle.bundle

# complete the install
make install

Post-Install Tips

After installing, you may want to do a few things to make your experience easier.

  1. Add permanent paths to your .bash_profile
    • export ORACLE_HOME=/Library/Oracle/instantclient_12_1
    • export DYLD_LIBRARY_PATH=$ORACLE_HOME
    • export PATH=$ORACLE_HOME:$PATH
  2. Add a tnsnames.ora for easier configuration
    • cd $ORACLE_HOME
    • mkdir -p network/ADMIN && cd network/ADMIN
    • touch tnsnames.ora
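As an example, a minimal tnsnames.ora entry might look like the fragment below. The alias, host, and service name here are entirely hypothetical; substitute your own database details.

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```

With an entry like this in place, tools and DBI connection strings can refer to the short alias (for example, dbi:Oracle:MYDB) instead of a full connection descriptor.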

On Software Quality

There are many metrics by which quality can be measured, and we can take quantitative as well as qualitative approaches when evaluating systems. As a software developer, I am particularly interested in how to effectively measure code quality. Quality is not something that can be retroactively applied or evaluated; rather, it must be designed into a system. As software developers, we have a responsibility to ensure our code is written with quality in mind. Let’s explore some practical ways to improve code quality while maintaining velocity and reducing overhead work.

Write Readable Code

The key to writing high-quality software is to ensure that any code you write can be read by another software developer at a later date. Unlike traditional engineering, where finalized designs are rarely modified, software by its nature is intended to be rewritten, improved, extended, and adapted. There is little or no performance cost to formatting your code for readability and providing comments for future developers. Another sure-fire way to improve code quality is to build code reviews into your process. Constructive peer feedback allows developers to improve their own coding standards and habits. Implemented properly, it strengthens teams by allowing open and honest conversation without fear of repercussion.

On the technical side, there are several tools and practices that almost every team should use to maintain a baseline for code quality. The first is to enforce a coding standard across your project, and I don’t just mean spaces versus tabs: brace placement (K&R style and the like), naming conventions, and what work is allowed in constructors. There are many existing style guides, such as Google’s Style Guides, which can be used as a base for your own. Establishing a standard and an in-house reference also allows some of this work to be automated via critic and linting tools, which can automatically format code and point out violated rules. Beyond code style, there are best practices to follow for each language and each development style. In object-oriented programming, for example, concepts like classes having a single concern and well-defined subsystem architectures are solid foundations on which to build, regardless of language.

From personal experience, I’ve learned over my career that privacy and simplicity may require additional thought now, but definitely pay off later. First, ensure all loops (except main application loops) are properly bounded; there is almost no case for unbounded while loops or goto statements, and simply by bounding your loops, entire classes of errors can be eliminated. Second, complex methods should always be broken down into smaller private methods. I don’t stick to the one-page rule of thumb, but I do recommend ensuring that each function is responsible for only one piece of logic. Complex methods that change behaviour based on input should likely be separate methods, with the caller responsible for making that distinction. Finally, ensure that any variables and methods that don’t need to be exposed are explicitly marked as private or protected; it is easier to grant access to data than it is to revoke it. Verifying and tracing a system is much simpler when classes have a single concern and unnecessary members are kept private.
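As a rough Python sketch of the bounded-loop and method-splitting ideas (the retry scenario and all names here are purely illustrative):

```python
import json

MAX_ATTEMPTS = 5  # every loop gets an explicit bound

def fetch_with_retry(fetch):
    """Call fetch() until it returns a value, but never more than MAX_ATTEMPTS times."""
    for attempt in range(MAX_ATTEMPTS):  # bounded, rather than `while True`
        result = fetch()
        if result is not None:
            return result
    raise RuntimeError(f"gave up after {MAX_ATTEMPTS} attempts")

class ReportWriter:
    """One method per behaviour; the caller decides which to use,
    instead of a single write(rows, format) that branches on input."""

    def write_csv(self, rows):
        return "\n".join(",".join(map(str, row)) for row in rows)

    def write_json(self, rows):
        return json.dumps(rows)
```

The bound turns a potential hang into a loud, diagnosable failure, and the split methods push the format decision to the caller, where it belongs.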

Keep Safety and Privacy In Mind

Developers must also consider the safety of their code, regardless of where it runs. Drivers, services, and applications are frequently compromised as a way to obtain access to restricted data. Modern systems are often written as online systems, exposing a rather large attack surface through API endpoints, microservices, and web services. Attackers will frequently go after these via weak authentication (hard-coded credentials, default administrator accounts for various tools), SQL injection (parameters not properly escaped or bound), and out-of-date packages (outdated copies of WordPress, unpatched OpenSSL libraries). Within code, it’s worth ensuring that all external data is sanitized before being used, and that we don’t expose more information than expected. For example, when writing an Oracle SQL statement, it’s safer to use binding than direct string interpolation. When you write your query, you can use a placeholder like this: select * from foo where bar = ?. When you execute, you supply the placeholder values, and the driver encodes the data properly, preventing one of the most common attack vectors. Watch for more subtle issues too, like auto-generated API endpoints that unintentionally expose variables because they are marked public instead of private. It is also important to avoid logging sensitive data, as logs often have more relaxed read permissions and can be compromised. Ensure things like session keys and passwords are never logged by your application.
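To make the binding example concrete, here is a small Python sketch using the standard library’s sqlite3 driver as a stand-in for Oracle (the table and data are made up; the binding principle is the same across drivers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar TEXT)")
conn.execute("INSERT INTO foo (bar) VALUES (?)", ("baz",))

# A classic injection attempt: with naive string interpolation,
# the OR clause would match every row in the table
user_input = "baz' OR '1'='1"

# Bound parameter: the driver treats the value as data, not SQL
safe_rows = conn.execute(
    "SELECT * FROM foo WHERE bar = ?", (user_input,)
).fetchall()  # the attack finds nothing

legit_rows = conn.execute(
    "SELECT * FROM foo WHERE bar = ?", ("baz",)
).fetchall()  # a genuine value still matches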

Another paradigm of safety is ensuring that your classes are properly scoped. When other subsystems and services consume your code, they are encouraged to use anything marked as public or friendly. This can lead to unexpected behaviour if you unintentionally leave variables and methods public. When writing tight units of code, it is important to only mark methods you intend to be consumable as public. These include get/set methods, constructors, and action methods on your class. If you need to enable extensible code, consider offering interfaces which can be implemented, or base classes which can be extended. This ensures that your code retains control over key attributes and structure.

Cover Edge Cases

One key thing missed by many developers when coding are edge cases. While the happy path through the application is frequently well covered, unexpected or improperly formatted input is frequently missed. When writing methods, particularly those that interface with other subsystems, it’s worth ensuring that they are appropriately hardened by considering all potential input. This is accomplished via equivalence class partitioning which allows input to be partitioned into classes of equivalent data. Ideally you would then write one test case for each partition to cover the broad spectrum of input. For example, if a method accepts integers we would want to test for values at the minimum and maximum integer value marks. For strings we would want to test null, zero, one, and and maximum length strings to ensure full coverage. For common objects this could likely be generalized to a pattern since the class partitions would be common to the type. Complex input, such as objects more advanced partitions would need to be devised. When edge cases are covered and tested properly, we can be confident that our code is robust and correct.

Exceptions are also frequently mishandled in code. Methods that can throw exceptions aren’t trapped at the appropriate level, or worse they aren’t even correctly propagated upwards. This can lead to unstable or incorrect application behaviour. When writing code it is important to consider how exceptions should be handled. In some cases we may only want to handle a subset of errors. For example a configuration class may try to read a configuration file from disk and get a java.io.FileNotFoundException which we can handle gracefully by using application defaults rather than custom values from a file on disk. In another situation we may find a file on disk and try to parse it only to get an oracle.xml.parser.v2.XMLParseException which we may want to pass up to the GUI to let the user know their custom configuration file is corrupt. When writing exception classes, it’s worth being verbose so that other classes can decide if an exception should be caught or passed upwards. Well structured and handled exceptions can make entire subsystems more robust and facilitate integration with other system components.

Targeted Documentation

This is a fairly controversial topic. Few people like writing detailed documentation despite many developers wishing they had access to cleaner and clearer documentation. Some sources say that the number of lines of documentation should match lines of code. Others say that the code should serve as documentation. My preferred method is to write concise documentation only as needed. If you have followed the previous guidelines for code quality then documentation should act only to clarify complex logic and document API methods within classes and libraries. Examples of this would include covering a basic description for every public method and constructor, and commenting complex regular expressions. When writing, try to stick to unambiguous technical language and use consistent terminology throughout, especially when referring custom components or business-related entities.