Jan Carlo Viray
Senior Front-End Engineer · React · Node · AWS · Postgres · Docker

LinkedIn · Twitter · Github

A year from now you’ll wish you started today — Karen Lamb

How to Merge Multiple Commits into one Git Commit?

To do this, we use the command git rebase. Typically, it is used to:

  • Edit previous commit messages
  • Combine multiple commits into one
  • Delete or revert commits that are no longer necessary

Let’s work through an example.

Let’s say we already have an existing repository with a lot of commits. First, check your commit log:

git log --oneline

Now, let’s say we want to merge the last 4 commits. Run git rebase with -i, which means interactive, and HEAD~4, which tells it to look at the last 4 commits:

git rebase -i HEAD~4

Something like this should show in your editor:

pick 43432432 my commit message to preserve
pick 43132132 my other commit message
pick 12353434 some commit message
pick 64554234 update something

If you’d like to squash all commits into “my commit message to preserve”, then change it into this:

pick 43432432 my commit message to preserve
f 43132132 my other commit message
f 12353434 some commit message
f 64554234 update something

Make sure you read the instructions git added as comments in your editor. Once satisfied, save and close the editor, then push to the remote repository. Because the rebase rewrote history, a plain push will be rejected. Do not git pull first (that would merge the old commits back in); force-push instead:

git push --force-with-lease origin master

Done? Yes, but a word of caution: squashing commits that have already been pushed rewrites shared history and makes life difficult for everyone else using the repository. Do this at your own risk, and prefer squashing only local, unpushed commits.
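If you want to experiment with the flow safely first, the whole thing can be scripted non-interactively in a throwaway repository. This is just a sketch (the repo, file names, and commit messages are made up); GIT_SEQUENCE_EDITOR stands in for the editor step:

```shell
# Create a throwaway repo with 5 commits, then squash the last 4 into one.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
for i in 1 2 3 4 5; do
  echo "$i" > file.txt
  git add file.txt
  git commit -qm "commit $i"
done
# Non-interactive stand-in for the editor step: turn every "pick"
# after the first into "fixup" (the same as typing "f" in the editor).
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/fixup/'" git rebase -i HEAD~4
git log --oneline   # two commits remain: "commit 1" and the squashed "commit 2"
```

fixup keeps only the first commit message of the group, which is exactly what the "f" lines in the editor example above do.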

Read More

Should Table Names be Plural or Singular? What about Column Names?

Naming tables, columns, variables, functions, etc. is an activity that tends to give me pause. I typically think about the future of the app, some “what ifs”, conventions, and whether the name truly gives good context to other developers or users. Throughout my career, I have noticed that in the end, as long as everyone involved in the project is consistent (and, better yet, has things documented), that typically outweighs hard-line rules.

With regards to database table and column names, I lean towards a certain convention:

  • A database table is a set, and every row is an object. If you were making an array, wouldn’t you pluralize your variable name? This is why I believe table names should be plural.
  • A table’s column is an element. In a sense, it is a property name of an object that contains a scalar data element (unless you’re using arrays in Postgres). If you were creating a class with properties that contain basic elements (string, int, bool, etc.), wouldn’t you make your property names singular? This is why it makes more sense for column names to be singular, unless you’re using arrays.

Read More

Should Business Logic Be in Database? Pros and Cons

This debate has been labeled as the “Vietnam of Computer Engineering” and deservingly so. Here are some pros and cons I have learned through experience so far.


Pros:

  • centralized business logic
  • independence from the application language: Node vs Go vs PHP vs Ruby will not be an issue
  • compared to applications, databases are less likely to need major refactorings
  • it is often more performant to have business logic closer to the metal
  • stored procedures can reduce network traffic since you avoid multiple requests


Cons:

  • database vendor lock-in, especially since major databases extend the SQL standard
  • it is much more difficult and expensive to scale the database layer horizontally
  • source control is harder to do properly with stored procedures
  • code reuse is very difficult


I believe that core business logic should live in the layer that is most scalable, testable, debuggable and versionable. Putting core business logic in the database makes it very difficult, and very expensive to fulfill those requirements.

Read More

Postgres Quick Start and Best Practices

Want to add or change something? Feel free to create a pull request. I hope this helps!

Create a Postgres Docker Container

Want to test something quick? Install Docker and run these commands!

# get latest image and create a container
# (the official image requires POSTGRES_PASSWORD to be set)
docker pull postgres
docker run --name pg -e POSTGRES_PASSWORD=mysecret -d postgres

# invoke a shell in the container to enter
docker exec -it pg bash

# now that you're inside the container, get inside postgres
# by switching to "postgres" user and running `psql`
su - postgres -c psql

# enjoy!


Install Postgres (latest, 9.6)

This is for Ubuntu/Debian distribution. For other versions, read this.

# update system and get some common tools
apt-get update
apt-get install -y software-properties-common wget sudo

# add repo based on Ubuntu codename - `lsb_release -cs`
sudo add-apt-repository "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main"

# get keys
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

# update to include added repo and install postgres
sudo apt-get update
sudo apt-get install postgresql-9.6

# start the postgres server and enable autostart
service postgresql start
systemctl enable postgresql

Postgres is set up to use peer authentication by default, associating roles with matching system accounts. This is why you need to log in as a specific user before you can run psql. The installation also created a system user, postgres. Check out /etc/passwd.

# signin to "postgres" system acct
sudo su - postgres

# this command by default, is equivalent to:
# psql -U current_system_user \
#      -d current_system_user_as_db_name

Intro to Postgres Configuration

Let’s enter through the starting point and work our way in. Make sure Postgres is running first with service postgresql start.

# check if the process is running
ps aux | grep postgres | grep -- -D

# /usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf

postgres starts the server, -D points to where the data lives, and -c points to the main configuration file to use. The main configuration file is postgresql.conf.

Open the file, less /etc/postgresql/9.6/main/postgresql.conf and check out the section called “FILE LOCATIONS”

Server Configuration

config_file = '/etc/postgresql/9.6/main/postgresql.conf'

The main server config, where you can tune performance and adjust connection settings, security and authentication, SSL, memory consumption, replication, query planning, and error reporting and logging.

Client Authentication

hba_file = '/etc/postgresql/9.6/main/pg_hba.conf'

This file is stored in the database cluster’s data directory. HBA stands for host-based authentication. This is where you set rules on who or what can connect to the server.

Fields include: Connection Type, Database Name, User Name, Address, Authentication Method

The first record with a matching connection type, client address, requested database, and user name is used to perform authentication.

There is no “fall-through” - if one record is chosen and the authentication fails, subsequent records are not considered. If no record matches, access is denied.

local   database    user    auth-method   [auth-opts]
host    database    user    address       auth-method     [auth-opts]
# ...

# allow any user on the local system to connect to any database
local   all         all                   trust

# allow any user from a host on the 192.168.93.0/24 network (an example
# network) to connect to database "postgres" as the same user name
# that ident reports
host    postgres    all   192.168.93.0/24    ident

# allow any user from the example host 192.168.12.10 to
# connect to db "postgres" if the password is valid
host    postgres    all   192.168.12.10/32   md5

# allow any user from hosts in the example.com domain if pass is valid
host    all         all     .example.com      md5

Connection Types:

local record matches connection attempts using Unix-domain sockets, which provide inter-process communication on the same host operating system. Without a record of this type, Unix-domain socket connections are disallowed.

host record matches connection attempts made using TCP/IP. Note that this will not work unless the server is given an appropriate listen_addresses configuration parameter, since by default it listens only on the local loopback address localhost.

Check out documentation for more types.


Database:

Specifies which database names this record matches; the value all specifies that it matches all of them. sameuser matches if the requested database has the same name as the requested user.

User:

Specifies which database user name(s) this record matches. The value all specifies that it matches all users.

Address:

Specifies the client machine address(es) that this record matches.

Auth Methods:

trust assumes that anyone who can connect to the server is authorized to access the database. This is appropriate for single-user workstation, but not on multi-user machines.

password (cleartext) and md5 authenticate by password. Note that if no password has been set up for a user, the stored password is null and authentication will always fail.

peer works by obtaining the client’s OS user name from the kernel and using it as the allowed database user name.

User Name Mapping

ident_file = '/etc/postgresql/9.6/main/pg_ident.conf'

This maps external user names to their corresponding PostgreSQL user names. The general form of a setting is: mapname sys-name pg-name. To use user name mapping, add map=map-name to the relevant record in pg_hba.conf. Here are some examples and scenarios of mapping:

mymap       brian             brian
mymap       jane              jane

# "rob" has postgres role "bob"
mymap       rob               bob

# "brian" can use roles "bob" and "guest1"
mymap       brian             bob
mymap       brian             guest1


Path to additional PID:

external_pid_file = '/var/run/postgresql/9.6-main.pid'

Data storage location:

data_directory = '/var/lib/postgresql/9.6/main'

Quick Start Overview

Nice Helpers for psql

Add these settings to your ~/.psqlrc file:

  • \set COMP_KEYWORD_CASE upper to auto-complete keywords in CAPS
  • \pset null ¤ to render NULL as ¤ instead
  • \x [on|off|auto] for expanded output (default is “off”)
  • \timing [on|off] to toggle timing of commands - great for benchmarks
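Put together, a minimal ~/.psqlrc with these settings might look like this (a sketch; pick the toggles you prefer):

```
\set COMP_KEYWORD_CASE upper
\pset null '¤'
\x auto
\timing on
```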

Add a System User

sudo adduser some_user

Add a Postgres User

Inside the Postgres prompt, create a new Postgres role with the same name as the system user we created earlier, “some_user”:

sudo su - postgres
psql

CREATE USER some_user;

Create a Database for Postgres User

CREATE DATABASE my_postgres_db;

-- associate to some_user
GRANT ALL ON DATABASE my_postgres_db TO some_user;

Exit the prompt with \q.

# Log into the user you created
sudo su - some_user

# connect to the database you created
psql my_postgres_db

Add a Table

-- create a table
CREATE TABLE items (
  equip_id serial PRIMARY KEY,
  type VARCHAR(50) NOT NULL,
  color VARCHAR(25) NOT NULL,
  location VARCHAR(25)
    CHECK (
      location IN ('north', 'south', 'west', 'east')
    ),
  install_date DATE
);

Modify a Table

-- add column
ALTER TABLE items ADD COLUMN functioning bool;

-- alter column
ALTER TABLE items ALTER COLUMN functioning SET DEFAULT 'true';
ALTER TABLE items RENAME COLUMN functioning TO working_order;

-- remove column
ALTER TABLE items DROP COLUMN working_order;

-- rename entire table
ALTER TABLE items RENAME TO playground_equip;

-- drop table
DROP TABLE IF EXISTS playground_equip;

Exit the postgres prompt with \q.

Import a Database

# log into default "postgres" user
sudo su - postgres

# Download sample database.
wget http://pgfoundry.org/frs/download.php/527/world-1.0.tar.gz

# extract archive and change to content directory
tar xzvf world-1.0.tar.gz
cd dbsamples-0.1/world

# create database to import the file structure
createdb -T template0 worlddb

# import sql
psql worlddb < world.sql

# log into database
psql worlddb


-- `\dt+` to see list of tables in this database
-- `\d city` to see columns, constraints, indexes, etc

Select

-- select
SELECT name, continent FROM country;

Order By

-- order by
SELECT name,continent FROM country ORDER BY continent, name;


Where

-- filter
SELECT name FROM city WHERE cc = 'USA';
SELECT name FROM city WHERE cc = 'USA' AND name LIKE 'N%';
SELECT name FROM city WHERE cc = 'USA' AND name LIKE 'N%' ORDER BY name;


Join

-- join
SELECT
  country.name AS country,
  city.name AS capital,
  country.continent
FROM country
JOIN city
  ON country.capital = city.id
ORDER BY continent, country;

Let’s work with JSON

-- create table with a json column
CREATE TABLE products (
  id serial PRIMARY KEY,
  name varchar,
  attributes JSONB
);

-- insert some data
INSERT INTO products (name, attributes) VALUES (
 'Geek Love: A Novel', '{
    "author": "Katherine Dunn",
    "pages": 368,
    "category": "fiction"}'
);

-- create an index
CREATE INDEX idx_products_attributes ON products USING GIN(attributes);

-- query an attribute
SELECT attributes->'category' FROM products;

-- extract query as text
SELECT attributes->>'category' FROM products;

Read More

Find Files, Texts, Processes in Linux

I’m building a “cheat sheet” on finding files, monitoring and everything related to that in Linux. I hope this helps!

Recursively Find Files Containing a Specific Text

grep -rn "pattern" /path/to/dir

# include only certain files
grep -rn "pattern" --include="*.js" --exclude="*node_modules/*" --exclude="*.min.js*" /path/to/dir

  • add -w to match whole words instead of partial matches
  • add -i for a case-insensitive search
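To see the include/exclude flags in action, here is a self-contained run against a scratch directory (the file names and pattern are made up):

```shell
# Make a scratch dir with a .js file and a .css file,
# then search only the .js files for the pattern.
dir=$(mktemp -d)
printf 'const port = 3000;\n' > "$dir/app.js"
printf 'port { color: red }\n' > "$dir/app.css"
grep -rln "port" --include="*.js" "$dir"   # -l lists matching files: app.js only
```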

How to Find Files in Linux

find /path/to/dir -name "*.js"

# execute a command on those files
# "{}" is replaced by each file name; "\;" terminates the -exec command
find /path/to/dir -name "*.js" -exec rm -f {} \;
find /path/to/dir -name "*.js" -exec chmod 700 {} \;

The find command is very powerful and can do additional things like:

  • finding files with certain permissions: find . -type f -perm 0664
  • finding files belonging to a specific user: find . -user joe
  • finding files modified exactly 50 days ago: find . -mtime 50
  • finding files of a given size: find . -size 50M
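A quick way to try these predicates out is against a scratch directory (the names and permissions below are illustrative):

```shell
# Create two files, then find only the .js file that has mode 0664.
dir=$(mktemp -d)
touch "$dir/app.js" "$dir/readme.txt"
chmod 0664 "$dir/app.js"
chmod 0600 "$dir/readme.txt"
find "$dir" -type f -name "*.js" -perm 0664   # matches app.js only
```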

Find If Your Process Is Running

ps aux | grep postgres

Read More

Better Git Commit Titles

For almost a year, I’ve been writing git commits in a more structured way. It has improved code reviews and skimming through code history. Inspired by angular commit messages, I adopted their commit message guidelines. Here’s an example:

fix(release): need to depend on latest rxjs and zone.js
docs(changelog): update change log to beta.5

Instantly, you get both the big picture and the small picture from those commit titles. You can immediately parse out a type and scope, which gives you context for the change without skimming through the files and diffs to understand what’s going on. Try it out in your workflow and let me know how it works for you and your team! Here’s an excerpt of some core types:

  • feat: A new feature
  • fix: A bug fix
  • docs: Documentation only changes
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • refactor: A code change that neither fixes a bug nor adds a feature …
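Here is the convention in practice in a scratch repository (the scope, file, and message are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "checkbox" > login.js
git add login.js
# type(scope): short imperative summary
git commit -qm "feat(login): add remember-me checkbox"
git log --oneline -1
```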
Read More

How to update all Docker images

Currently, Docker does not have a command to do this, so we will have to do some good old-fashioned command piping. To automatically update all images:

docker images | grep -v REPOSITORY | awk '{print $1}' | xargs -L1 docker pull
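You can dry-run the piping part of that one-liner against a mocked docker images listing before pointing it at the real thing (the sample output below is made up):

```shell
# Simulated `docker images` output: a header row plus two images.
# grep -v drops the header, awk keeps the first column (the repo name).
printf 'REPOSITORY   TAG     IMAGE ID\npostgres     latest  abc123\nredis        latest  def456\n' |
  grep -v REPOSITORY | awk '{print $1}'
# prints:
# postgres
# redis
```

Those names are what xargs -L1 docker pull would then pull one at a time.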

Docker does not overwrite old images for us. To clean up old images:

Read More

Common Linux Commands

Expand this blog topic if you’d like to see the text version or read more.


How to extract a tar file?

tar xzf file.tar.gz

How to compress a file?

tar czf zipped.tar.gz unzipped.pdf
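A round trip makes the two commands concrete (the file names here are made up):

```shell
# Compress a file, delete the original, then extract it back.
dir=$(mktemp -d)
cd "$dir"
echo "hello" > note.txt
tar czf notes.tar.gz note.txt   # c = create, z = gzip, f = file name
rm note.txt
tar xzf notes.tar.gz            # x = extract
cat note.txt                    # prints "hello"
```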

Creating Jobs/Services

How to run a script in the background?

Append & at the end of the command. For example, tail -f /var/log/syslog &.

Note that when you exit the shell, the process will also be terminated with a hangup signal (kill -SIGHUP [pid]). This means that if you’re SSH’d into a server, run a process in the background, and then exit the server, the process will terminate as well.

How to run a script in the background without getting it terminated on shell exit?

Prefix your command with nohup, which means “no hangup”. It’s a poor man’s way of running a process as a daemon. Use this only for processes that will take some time but will not hang around too long.
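A minimal sketch (the command and log path are illustrative):

```shell
# Start a long-running job that survives shell logout, redirecting
# output so nohup doesn't drop a nohup.out file in the current dir.
nohup sleep 30 > /tmp/job.log 2>&1 &
pid=$!
kill -0 "$pid" && echo "still running as $pid"   # kill -0 only checks liveness
```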

Read More

How to Find Files Containing Specific Text

How do you find files containing specific text recursively on a Unix/Linux system? I typically use grep like this:

grep -rnw "pattern" /path/to/dir

Read More

Companies Using Docker

Going with the flow of “companies using [insert tech]”, here are the companies using Docker. Big companies with a need for scalable and distributed systems seem to have adopted Docker. Though there are many critics saying Docker is not production-ready, I am impressed to find that a global financial services corporation such as ING is using Docker containers to drive 500 deployments a week. In addition, Goldman Sachs, a global investment bank, uses Docker containers to centralize application builds and deployments. For all the critics out there, here’s a myth-busting article on Docker’s supposed “flaws”. Check out the list below as well.

Read More

Companies Using Angular

To be fair, it would be nice for my other favorite stack to have an updated list of companies using it. Here’s the list of companies using AngularJS, ordered from most recently found to oldest.

Read More

Companies Using React

I am quite impressed by the adoption of React.js (it is not even 1.0 yet!). From my research and experience working with both Angular and React, as well as networking within the field, I personally see that (with exceptions) Angular.js is the de facto framework chosen by early startups, while React.js is the de facto framework chosen by more established companies and startups. Personally, I love both. I find Angular.js more fun to work with (at the cost of occasional frustrations), while React.js is a more peaceful and less stressful framework, though at the cost of more typing and boilerplate code.

Read More

React Props in State: An Anti-Pattern

Passing the initial state to a component as a prop is an anti-pattern because the getInitialState method is only called the first time the component renders, and never after that. This means that if you re-render the parent while passing a different value as a prop, the component will not update its UI, because it keeps the state from the first render. This makes the application very prone to errors.

Solution? Make the components stateless. They are easier to test because they render an output based on an input. Have the parent component contain the passed-in data as its own state; if that state changes, the parent re-renders its children, passing everything they need through props.

What does this look like in React?

Read More

React.js Lessons Learned

React.js lessons learned and some opinionated best practices. This is an ongoing, in-progress compilation of everything I have learned developing in React.js.

Basic Component Organisation

var MyComponent = React.createClass({
    propTypes: {},
    mixins: [],

    componentDidMount: function() {
        // Third-party code initialization here...
        // Store addChangeListener here...
    },

    componentWillUnmount: function() {
        // Store removeChangeListener here...
    },

    render: function() {
        return null;
    }
});

Flux Pattern Tips


Stores:

  • Stores don’t have to just represent data. They can also represent application state, whether modals are shown/hidden, or if the user is online/offline, etc.
  • Stores can and probably will need to use data from other stores. Use waitFor method in dispatcher.
  • Stores should not contain the logic to fetch themselves on a server; it is better to use DAO (data access objects) that are thinly wrapped API objects to get the data.


Actions:

  • Actions should be split into two types: view actions and server actions. This will separate user interactions such as clicking a button from retrieving data.
  • Actions should be fire-and-forget and must not have callbacks. If you need to respond to the result of an action, you should listen for a completion or error event. This enforces data being kept in the store and not on a component.

Read More

React Best Practice Compilation

Here is a compilation of best practices I have learned and compiled building React.js applications. Feel free to add to this content by visiting my blog source and sending a pull request.

jQuery vs React

jQuery Style

Event Handler <—> Change DOM

React.js Style

Event Handler –> State –> render()

  • Event handler changes state. React does a diff in virtual dom and renders.

Properties vs State

  • Properties are immutable.
  • State is mutable. State is reserved only for interactivity. Therefore, anything that is going to change goes into state. State should store the simplest values possible.

  • Props in getInitialState is an anti-pattern.
  • Avoid duplication of “source of truth” which is where the real data is. Whenever possible, compute values on the fly to ensure they don’t get out of sync later on and cause maintenance trouble.
  • It is often easier and wiser to move state higher in the component hierarchy.
  • You should never alter state directly. Change it through setState.

  • Most importantly, the state of a component should not depend on the props passed in (state as in the state of the component, not of the app). A huge “code smell” is when you start seeing state depending on props. For example, this is not good: constructor(props){ this.state = { fullName: `${props.first} ${props.last}` } }. Derived values like this should be computed in render.

  • Leave calculations and conditionals to the render function.
  • Note that every time state is changed, render is called again.

Read More

How to share/sync directory inside Vagrant

To share directories, add this to your config: config.vm.synced_folder "host/relative/path", "/guest/absolute/path". Below is an example within a full configuration file.

Vagrant.configure(2) do |config|
	# note that you can have this config multiple times but
	# it should only be used for source code since there is
	# heavy performance penalty on heavy I/O such as database files

	# first path is the host's path, which can be absolute
	# or relative to the project's root directory

	# second path is the guest path, and it must be absolute.
	# It will always be created if it does not exist.
	config.vm.synced_folder ".", "/vagrant"

	config.vm.synced_folder "./some/dir/one", "/one", create:true
	config.vm.synced_folder "./some/dir/two", "/two", create:true

	# note that you can also add type. NFS is the faster
	# bidirectional file syncing. In order for this to work,
	# the host machine must have nfsd installed. It comes
	# preinstalled on OSX and is a simple package install on Linux
	config.vm.synced_folder ".", "/vagrant", type: "nfs"

	# owner/group
	config.vm.synced_folder ".", "/vagrant", owner: "root", group: "root"
end

Read More

How to install Vagrant and VirtualBox

Vagrant provides easy to configure, reproducible, and portable work environments controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. If you’re a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment, without sacrificing any of the tools you’re used to working with (editors, browsers, debuggers, etc.). Once you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for you to work. To get started, download Vagrant and VirtualBox.

Quick Start

vagrant init hashicorp/precise32
vagrant up

Sample Working Vagrantfile

Create a file and call it Vagrantfile

Vagrant.configure(2) do |config|
	config.vm.box = "precise64"
	config.vm.network :forwarded_port, guest: 80, host: 8800
	config.vm.provision "shell", path: "provision.sh"
end

Once you have that, run vagrant up.

Read More

State is an Anti-Pattern

As much as possible, do not use state at all. According to this comment, state should never have been in the library in the first place. If Flux had been introduced alongside React from the start, state might not even exist; it seems state was added to let React function by itself without Flux. If a React component must have side effects, it should use Flux actions instead of holding state. Ideally, components should have no state at all.

Read More

New Portfolio Site

I finally put up my portfolio site at jcviray.com. I’m currently available for freelance/remote work, and I specialize in Angular, React, Node, mobile, and responsive design. Let’s connect on LinkedIn and Twitter!

Read More

How To Reduce Docker Image Size?

Docker containers built from Dockerfiles can grow very big in size. There are a few simple tricks to cut back on some of the container fat. Here are some of the ones I’ve used.

Clean the APT

RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Flatten the Image

ID=$(docker run -d image-name /bin/bash)
docker export $ID | docker import - flat-image-name

Then, you can save it for backup too.

ID=$(docker run -d image-name /bin/bash)
docker export $ID | gzip -c > image.tgz
gzip -dc image.tgz | docker import - flat-image-name

Read More