MSIS graduate program acceptance

I’ve been accepted into the Master of Science in Information Systems at the Foster School of Business at the University of Washington. It’s been a whirlwind of activity since the acceptance: I’ve moved up to Seattle, gotten settled into a new city, and been busy preparing as much as possible for the program. Fun and exciting stuff on many levels!

First of all, Seattle is a pretty new experience for me. Even though I lived in Portland, which is only a few hours away, I never made it up here during the six years I lived there. Hard to believe, but true! But I’m loving this city - it feels like nature is close at hand, with the water and mountains never far away, and the cultural options are world-class. Last week I attended both the opera Carmen and a Brahms concerto at the Symphony. Plus, the weather has been amazing, so it has been fun settling in and getting used to my new home :)

Additionally, the program is going to be very interesting and is already pushing me to learn Python to get ready for machine learning, as well as to blow the dust off my cloud computing knowledge and skills. It’s all really interesting material, and I’m excited to be pushed in all of these areas :)

A new adventure!

Building Lumenary for the Portland Winter Light Festival

This past weekend, I displayed my interactive art sculpture titled “Lumenary” at the 2019 Portland Winter Light Festival. It was an amazing experience to be part of the artist community showcasing their artwork, and to experience displaying my own art piece to the greater community.

Here’s Lumenary: a modern-day interpretation of an image created by a twelfth-century mystic named Hildegard von Bingen:

In essence, Hildegard had a vision of the interconnected nature of people, animals, and the entire cosmos. She spoke her truth in an environment that did not always value her message. Lumenary is an homage to her strength as a person and to her message, which seeks out the bridges that connect people while also valuing the environment.

In this post I’ll share some of the highlights and milestones of building Lumenary.

Initially, I was intrigued by the story of Hildegard von Bingen, as described by the theologian Matthew Fox. In particular, I was drawn to an illuminated manuscript containing an image by Hildegard, which depicted the essential unity of the cosmos:

Hildegard’s Cosmological Vision.

I was intrigued by this image for a variety of reasons: how it reminded me of Jung’s mandalas, the beautiful interplay of colors, and especially how the message of connectedness and unity was reflected in the imagery. In today’s society, we are often exposed to messages that stress the divisions amongst ourselves, so this message of unity seemed especially relevant and timely.

A big part of Lumenary was the interactive aspect, which was provided by a Raspberry Pi single-board computer along with several Arduino microcontrollers. These tiny computers were attached to sensors, and through their programming allowed for responsive light interactivity via LED strips. With the Raspberry Pi’s built-in wifi capabilities, I built an application using cloud platforms and services, including a serverless architecture with resources such as a MongoDB database, AWS Lambda, API Gateway, Cognito, and more.

This was where most of the work got done: my laptop, a soldering iron, a third-hand helper for holding the small electronic pieces for soldering, the various microcontrollers, and of course, copious amounts of coffee :)

I found that the most useful sensor was the HC-SR04, which uses ultrasonic waves to determine the proximity of an object. This type of feedback provided much of the interactivity, changing lighting patterns as a person drew near.
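As a rough illustration of the math the sensor relies on (a sketch in JavaScript for clarity, not the actual Arduino code): the HC-SR04 reports the round-trip echo time of an ultrasonic pulse, and since sound travels at roughly 343 m/s, dividing the echo time in microseconds by about 58 gives the distance in centimeters.

```javascript
// Sketch of the HC-SR04 distance math (illustrative only).
// The echo pulse width is the round-trip travel time of sound.
// Sound moves ~0.0343 cm/us, so: distance = (time * 0.0343) / 2 = time / 58.
function echoToCm(echoMicroseconds) {
  return echoMicroseconds / 58;
}

// e.g. a 580 us echo means an object roughly 10 cm away
```

In the installation, this distance reading is what drove the changing lighting patterns as someone approached.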

Here’s a glimpse of hooking up the HC-SR04 ultrasonic sensor to the Arduino:

One of the microcontrollers used was Adafruit’s Trinket, which was useful because it was lightweight and its circular shape mirrored that of the NeoPixel LED ring. Together, they provided a light source and interactivity towards the upper part of Lumenary. In addition, the NeoPixel ring provided the light source for a crystal hanging from it, creating moving “stars” on the canvas as the wind gently moved it in relation to the light.

Here’s a snapshot of doing the coding needed for the microcontrollers to control the LED lighting patterns:

Now, onto the physical layer of Lumenary!

First, the frame was created for holding the canvas (a cotton sheet) using 1/2” pine:

Then came filling out the canvas. The goal was to create an art space that mirrored a traditional painting canvas and easel:

With the frame completed, the Raspberry Pi, a fish tank with a betta fish (to represent the Ocean element), and lighting were added in stages:

As the piece grew, elements were added to the canvas to represent an interpretation of Hildegard’s original vision. The Trinket and NeoPixel ring were added to the top of the canvas, and EL wire was added to the side to represent an organic aspect of nature/land. Bit by bit, pieces were added to the canvas.

Part of the imagery called for mystical elements, and crystals provided that. Light bulbs, with their embedded connotations of thought and creativity, were added, and as you can see, crystals were added to them as well.

One challenge was adding all the myriad wiring and electrical/electronic components without detracting from the overall aesthetic. Here was one junction that invited some creative approaches!

With the Festival happening in the dead of winter, I tried to protect the electronics as much as possible. I will need to do a better job on this part, as I lost some of the sensors when it started to rain hard on Friday night.

Transporting Lumenary to the Festival site was facilitated with copious amounts of Saran Wrap and duct tape:

Here was the setup, after being moved under a shelter, as a major storm hit Portland - yes, snow and rain.

At night, Lumenary lived up to its name, and people were drawn to inspect the piece more closely.

Here’s Lumenary Lit Up At Night

A special thanks to my sister, Karen, who came to help me on the project! :)

In conclusion, it was a wonderful experience, and I had some great conversations with people who were intrigued by Hildegard. It was enriching to feel like I was part of an important conversation, and hopefully the piece provided a glimpse into an amazing historical figure.

Lumenary - an Interactive Cosmological PDX Festival of Lights Art Sculpture (Intro)

I’ve been accepted by Portland’s Winter Light Festival to display a conceptual art piece that I’ll be working on between now and February 7th, 2019.


The vision of the art piece is grandiose - to engage the conversation that Hildegard of Bingen created back in medieval times, when she used art to depict her personal visions of the cosmological unity of the universe. Her emphasis on the holistic was innovative, and she used a variety of creative methods to share her vision - including “illuminated manuscripts”, or detailed paintings which embraced mandalas and depictions of profound significance to touch upon deeper meanings.

To me, the person of Hildegard, her message, and her means of expression are all really interesting, and this art installation will attempt to re-capture some of that vibrancy by using modern-day tools to create an “illuminated manuscript” that inspires and empowers viewers of today.

I’ll be exploring a variety of ways of doing so - using code to provide the smarts for the microcontrollers to interact with the audience and sensors, creating movement and variety in the lighting, and using mixed media to frame the context and create a different form of communication. Together, the two will complement each other to create an experience that will (hopefully) engage and inspire those who view it. :)

Sessions in Express

In recent posts I described building out a MERN app, with an emphasis on tying the database to the server with a simple view using React. With that as a foundation, we can now start building in more advanced functionality, and one of the fundamental pieces of that, for an app that works with users and data, is sessions. This post reflects some of what I’ve learned in bringing in this functionality.

A really great tutorial describes that sessions are needed for applications because they create application state. That is, we need to have some way to have persistence of a user’s experience within the app so that we can serve up continuity. The way that I like to think of it is that it’s like going to see a therapist, and so you set up an appointment to have a session with the therapist. The session is going to involve notes so that what you covered during that session will be remembered. Who wants to go back to the therapist and start all over again each time?! That’s what the notes are for, to create that persistence. The notes, actually, are what the cookie does; it holds session data.

As that tutorial describes, there are a couple of ways of storing session data (to continue the analogy, ways to store those notes from the therapy session): in application memory (after the therapy session, the therapist burns the notes); in a cookie, which is sent back and forth between the server and the client and actually holds the data itself (not as secure); in a memory cache (such as redis), wherein the cookie holds just the sessionId and the data is stored on another server running redis; and finally in a database (very similar to the memory cache in its setup).

With the module express-session, we still use a cookie, but it’s behind the scenes. As this module’s docs note, we need to have a unique session value, which we can generate via the uuid module. The documentation also notes the use of uid-safe for ID generation. With this module, session data is not saved in the cookie itself; it is saved server-side.

The sessionId will automatically be saved and sent in each client request to the server (inside the header).

We use a session store to hold data because otherwise, if the server is restarted or the client-side application is closed, then the data will be lost.

In my app, I attached a numbers key to the session object and updated that value with every subsequent request sent to the server. For me, it’s helpful to see how these things actually work, because the documentation can be pretty lofty and wordy! ;)

I used the ‘session-file-store’ module to create a local store file wherein the session data is saved. This is a baby step up from the default in-memory store, which evaporates once the server restarts. The session data is saved to that file.

What really drove it home for me was using Postman to send a GET request to the ‘/‘ route and seeing the ‘Set-Cookie’ header as part of the response. That same value, the sessionId, is the name of the file in the session-file-store. So the session is created the first time, and subsequent GET requests to that route are within that session. On the client side we have the sessionId, but all the data attached to that sessionId is stored on the server side, within that store. Conceptually, it makes sense how the cookie provides continuity and an extra degree of security, and also how, instead of saving the data to a file, it could instead be saved to a database or redis.

But that’s another post :)

Driftwood Meets Photoresistor

I’ve always thought the combination of metal with wood, or wood with light or glass, to be beautiful - I just love that interplay of different textures and feels.

I’m trying to take baby steps towards making the artistic creations that I dream up, and taking that process as a chance to learn to listen to my inner critic and graciously let those feelings go. In short, I’m trying to be easy on myself on this journey towards becoming an artist who makes the kind of art that makes me bolt out of bed early in the morning, eager to start working. :)

Anywho, I started off with this beautiful piece of driftwood that I found just north of Florence on the Oregon coast:

Then I dived into the electronics piece of things - here’s the schematic that I used, using a photoresistor, battery, resistors, and LED:

And then built a working prototype just using electrical tape and a breadboard. Hooray, it works!

Drilled a hole into the driftwood the size of the LED:

And then added the electronics to the base of the driftwood (I used my Dremel to create a cozy nook for any protruding parts).

And here’s it working when the lights are on :)

So, not too complicated in the big scale of things but it was fun to mix up the two mediums!

Server-Side Rendering Vs Client-Side

I created an application called ‘Community Quotes’ that allowed users to create, view, update, and delete quotes. It was built on the MERN stack with CRUD functionality, and it first started out using the ejs template engine; later I switched to using React as the View.

Having successfully finished up that application, which you can see here, I’ve been thinking about the benefits of that shift, so thought I’d write my thoughts here.

The advantage to using the template engine, from the developer’s point of view, is that it’s pretty darned easy to spin up and use within the Express world. Just add a few lines of middleware, create some view documents, and then use the render method, passing data variables to said view. Pretty sweet. The biggest drawback, however, as I see it, is that those views are all created on the server side, so the templated document (using ejs) is built on the server and then sent to the user via the HTTP response object. Well and good, but there’s a price to pay in terms of latency for the user experience.

We especially don’t want to be going back and forth to the server every time that we want to make or receive changes to the data, and that’s going to be especially true with a mobile app as we are looking for snappy responsiveness!

Which is why harnessing the computing power on the client side makes sense. That’s where React shines, in terms of being able to handle events and state without needing to check back with the server for minutiae. Of course, the server will still need to be used for database connectivity, so there will need to be some back and forth between client and server, but it certainly will not be to the same degree as pure server-side view rendering.

Deploy React + Node/Express to AWS part 2

Okay, the first post was getting too long so had to break it up! :)

So, I left off with having gotten nginx booted up and with everything being run off port 80. Now we want to move away from the hard-coded simple Express server that I originally built using vim on the EC2 instance, to being able to pull backend code to that instance using git.

To do that, I instantiated a repo for the backend code.

The back end, here, is an EC2 instance with Express sitting on Node.js, acting as a RESTful API. The EC2 instance is part of a VPC architecture for firewall security.

Previously, on the EC2 instance, I’d created a very simple Express server to test out the framework, but building code via the terminal is awkward, so I added a private key to the EC2 instance and the backend repo so that I can work within the GitHub environment and simply pull code into my EC2 instance.

Ahh, much better!

The next thing I did was create another route within Express that would supply JSON data. I then pushed that revised repo to VCS and pulled it into my codebase within the EC2 instance. With that in place, I could add that route to the EC2 URI and see the returned JSON data; voila, we have an API server in effect!

At this point, we have a React app running on the S3 bucket, and we have a Node/Express server app running on the EC2 instance, the latter returning JSON data when the appropriate route is called. Now, to link the two together.

Deploy React + Node/Express to AWS part 1

My goal is to deploy a React app (created with CRA) to S3 on the client side, and to use an EC2 instance running an Express server on Node.js as an API server.

Instead of building a monolithic MERN app served by one server, which does server-side rendering for example, I’ve decided to separate the business logic of the API from the front-end design. There are a lot of advantages to such a microservices approach, including separation of deployment (which allows for speedier file access and lower latency), faster iteration, and simpler product logic.

S3 holds files, and so is great for file storage or for hosting a static website; for this project, I’ll be using S3 for the view provided by React. The advantages of S3 are scalability, reliability, and speed. Having a CDN in front of the S3 bucket will offer faster delivery for users and lower costs.

The EC2 instance will be serving the API via Node.js and Express. This will be the Node API. This could have been handled by Elastic Beanstalk, but as I’m wanting to get my hands dirty with each of the MERN components, I’ll be building out the EC2 instance and then connecting the db manually. Then of course there is the option of going serverless using Lambda functions and the API Gateway, which is definitely a great option, but I really want to build that solid foundation using the MERN stack before taking advantage of the more specialty tools available. If that makes sense :)

As noted in this blog, it’s advantageous to put a CDN via Cloudfront in front of the s3 for a variety of reasons.

Anywho, I used CRA to create a /client sub-folder within the main app. Now, how to upload to the S3 bucket? Simple, just use:

aws s3 cp build/ s3://<bucket-name>/ --recursive

That command uses the AWS CLI to upload the production build files from your local computer to the bucket. Just remember to set the bucket permissions to publicly accessible :) See this post for more info, as well as for setting up the CDN.

Now on to the back end with the EC2 setup. And before doing that, we need to set up the VPC! So, back to AWS to set up the VPC, creating one with the help of the AWS docs. I used the biggest IPv4 CIDR block range, with a public subnet within AZ us-west-2a. Created a second (private) subnet within AZ us-west-2b, which is where the EC2 instance will be instantiated. Attached an Internet Gateway to the VPC.

Actually, I decided to put the EC2 instance in the public subnet, so as to concentrate on the application itself.

So, now I SSH’d into the instance, updated it, and installed Node.js. See the AWS article. Current version = 8.11.4.

Now, it’s time to follow another AWS doc, this time for installing node.js

Activated nvm and installed the latest version, 8.11.4, then created a simple Express server with a single route rendering the infamous ‘hello world’ :) In order to do that, I had to add an inbound rule for port 3000, for test purposes. (The web normally uses port 80, but that’s a restricted port.) In order to run the app on port 80, we’ll need to do something like use nginx, and maybe use PM2 to keep the app running on restart too. Good article on spinning up that Express server here.

Now, to change http traffic to port 80, via this post.

Just discovered that the Linux distro AMI I used doesn’t support nginx, at least not easily, so time to delete it and add Ubuntu. Sigh.

Okay, it’s all good practice! So, right now there’s an Ubuntu server running Express on the latest Node version, on port 3000. We’d like to switch this to port 80 so that we can use this public port and routing. That is, port 80 is the default port for HTTP traffic.

So, first things first! Nginx is great for use as a router, so after a system OS update, install nginx. Nginx runs automatically once installed, so going to port 80 now shows its homepage. We are making progress!

Add a config file in sites-enabled that forwards HTTP traffic from port 80 to port 3000. Now, after I shut down the server and restart it, I can use port 80 to render my Express app. Woot!
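For reference, that sites-enabled config looks roughly like this (a sketch; the catch-all server_name and the port numbers are assumptions based on the setup above):

```nginx
# Forward incoming HTTP traffic on port 80 to the Express app on port 3000
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```

With this in place, nginx acts as the public-facing router while the Express app stays on its unprivileged port.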

Now, onwards to a process manager, PM2, to restart the server on reboot.

Using Multer For Uploads

My goal is to create an application where users log in, are authenticated, and once that is completed, they can Create, Read, Update, and Delete todo tasks. I’d also like for the user to be able to upload an image, as part of that todo.

In order to do that, I included the multer module. Each file contains a variety of field keys: fieldname, originalname, encoding, mimetype, size, destination, filename, path, buffer. We can call multer, passing in a few possible options: dest/storage, fileFilter, limits, preservePath. In this case, I’ve created a storage option and a fileFilter option. Within each of those options, different configuration was needed.

For storage, diskStorage gives you full control over storing files to disk. You have the two options of destination and filename, the first of which I set to a folder called photo-storage. The second was a filename consisting of the field input text plus the current timestamp and the file extension; this ensures that it will be a unique file name.

For fileFilter, we can control which files should be uploaded. We do this by providing a function of the form fileFilter(req, file, cb) and invoking the callback with cb(null, true) for files we allow to be uploaded. So, in the code, we look at the file type, and if the first part of that type is ‘image’, then we call the callback with a boolean true.