Nginx — static file serving confusion with root & alias

There is a very important difference between the root and the alias directives: it lies in how the path specified in root or alias is processed.

With the root directive, the full request path, including the location part, is appended to the root, whereas with the alias directive, only the portion of the path NOT including the location part is appended to the alias.

To illustrate…

Let’s say we have the config

        location /static/ {
                root /var/www/app/static/;
                autoindex off;
        }

In this case, the final path that Nginx derives will be

/var/www/app/static/static

This is going to return a 404, since there is no static/ directory within static/.

This is because the location part is appended to the path specified in the root. Hence, with root, the correct way is

        location /static/ {
                root /var/www/app/;
                autoindex off;
        }

On the other hand, with alias, the location part gets dropped. So for the config

        location /static/ {
                alias /var/www/app/static/;
                autoindex off;
        }

the final path will correctly be formed as

/var/www/app/static
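
To make this concrete, here is how a hypothetical request for /static/css/app.css (the file name is only an illustration) resolves under each variant:

        # root /var/www/app/static/;   (the broken config above)
        #   GET /static/css/app.css  ->  /var/www/app/static/static/css/app.css  (404)
        #
        # root /var/www/app/;   or   alias /var/www/app/static/;
        #   GET /static/css/app.css  ->  /var/www/app/static/css/app.css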

See the documentation here: http://wiki.nginx.org/HttpCoreModule#alias

 

References:

http://stackoverflow.com/questions/10631933/nginx-static-file-serving-confusion-with-root-alias

How to setup Koel personal music streaming server in 10 simple steps with Laragon

Koel is a simple web-based personal audio streaming service written in Vue on the client side and Laravel on the server side. Targeting web developers, Koel embraces some of the more modern web technologies – flexbox, audio and drag-and-drop APIs, to name a few – to do its job.


Laragon offers a fast, powerful and isolated development environment. It is portable and very flexible.


Make sure Laragon is running, then press Ctrl + Alt + T to open the Terminal

Ctrl + Alt + T 

In the Terminal, first jump to your Document Root

cd C:\laragon\www

Clone the koel project and jump into the project directory

git clone https://github.com/phanan/koel.git && cd koel

Install npm-install-missing (this module will attempt to reinstall any missing dependencies).

npm install -g npm-install-missing

Install nodejs dependencies. (You can press Ctrl + T to open a new tab and run Step 6 simultaneously. If you find any errors, run the command again)

npm install 

Install php dependencies

composer install

Modify .env file

# After that, it can (and should) be removed from this .env file
ADMIN_EMAIL=login@email.com
ADMIN_NAME=leokhoa
ADMIN_PASSWORD=secret
....
DB_HOST=localhost
DB_DATABASE=koel
DB_USERNAME=root
DB_PASSWORD=

 

Click the Start All button to start the Apache & MySQL servers. Laragon will detect the project and create a pretty URL: http://koel.dev
If not, right-click to open the menu and select Apache > Reload

 

Create MySQL database for koel

 mysqladmin -u root  create koel

Init database & done!

php artisan init

Now navigate to http://koel.dev; you should have your personal music streaming server up and running
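
You can also check from the Terminal that the virtual host responds (a quick sanity check, assuming Laragon created the koel.dev URL as described above):

curl -I http://koel.dev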

 

References

https://forum.laragon.org/topic/18/how-to-setup-koel-personal-music-streaming-server-in-10-simple-steps

Ansible Tip: Run local action on remote servers

I hope the code block below is self-explanatory: it first gathers facts from the appservers group, then runs a local curl against each app server's address.

- name: appserver gathering fact
  hosts: "appservers"
  gather_facts: True
  tasks:
    - debug: msg="{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"

- hosts: "127.0.0.1"
  gather_facts: False
  sudo: False
  connection: local
  tasks:
    - name: Update Translation
      shell: 'curl -H "Content-Type: multipart/form-data" -F file=@cxoLocalization.xlsx -X POST http://{{ hostvars[item].ansible_default_ipv4.address }}:8000/rest/translation/uploadTranslationDatasource'
      with_items: groups['appservers']
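
To run it, something like the following should do (the playbook filename update_translation.yml is only an assumption; appservers must exist in the hosts inventory):

ansible-playbook -i hosts update_translation.yml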

Fixed “unknown filesystem type ‘LVM2_member’ ” on Ubuntu

Steps

# sudo apt-get install lvm2

# vgs
  VG                          #PV #LV #SN Attr   VSize   VFree
  vg-rightscale-data_storage1   1   1   0 wz--n- 100.00g 10.00g

# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg-rightscale-data_storage1" using metadata type lvm2

# lvs
  LV    VG                          Attr      LSize  Pool Origin Data% Move Log Copy% Convert
  lvol0 vg-rightscale-data_storage1 -wi-ao--- 90.00g

# modprobe dm-mod
# vgchange -ay vg-rightscale-data_storage1
# mount /dev/vg-rightscale-data_storage1/lvol0 /mnt/storage
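
To double-check that the mount succeeded (standard commands, nothing specific to this volume group):

# df -h /mnt/storage
# mount | grep /mnt/storage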

Credit to http://www.itbox4vn.com/2011/06/fixed-unknown-filesystem-type.html

Using logs to build a solid data infrastructure (or: why dual writes are a bad idea)

must read

Confluent

This is an edited transcript of a talk I gave at the Craft Conference 2015. The video and slides are also available.

How does your database store data on disk reliably? It uses a log.
How does one database replica synchronise with another replica? It uses a log.
How does a distributed algorithm like Raft achieve consensus? It uses a log.
How does activity data get recorded in a system like Apache Kafka? It uses a log.
How will the data infrastructure of your application remain robust at scale? Guess what…

Logs are everywhere. I’m not talking about plain-text log files (such as syslog or log4j) – I mean an append-only, totally ordered sequence of records. It’s a very simple structure, but it’s also a bit strange at first if you’re used to normal databases. However, once you learn to think in terms of logs, many problems of…


Install Redis in Ubuntu Trusty

Easy?
Just

sudo add-apt-repository ppa:chris-lea/redis-server
sudo apt-get update
sudo apt-get install redis-server

 

If you successfully install redis-server in Trusty, you’re lucky.

If you get the following error, here are the steps that helped me.

apt-get -f install redis-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  redis-tools
0 upgraded, 1 newly installed, 0 to remove and 181 not upgraded.
55 not fully installed or removed.
Need to get 0 B/65.7 kB of archives.
After this operation, 260 kB of additional disk space will be used.
(Reading database ... 96508 files and directories currently installed.)
Preparing to unpack .../redis-tools_2%3a2.8.4-2_amd64.deb ...
Unpacking redis-tools (2:2.8.4-2) ...
dpkg: error processing archive /var/cache/apt/archives/redis-tools_2%3a2.8.4-2_amd64.deb (--unpack):
 trying to overwrite '/usr/bin/redis-benchmark', which is also in package redis-server 2:2.8.19-rwky1~trusty
Errors were encountered while processing:
 /var/cache/apt/archives/redis-tools_2%3a2.8.4-2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

First, check the redis-server package

 apt-cache showpkg redis-server

got

Dependencies:
2:2.8.19-rwky1~trusty - sysv-rc (18 2.88dsf-24) file-rc (2 0.8.16) libc6 (2 2.14) adduser (0 (null)) redis-doc (0 (null)) redis-server:i386 (0 (null))
2:2.8.19-1chl1~trusty1 - libc6 (2 2.14) libjemalloc1 (2 2.1.1) redis-tools (5 2:2.8.19-1chl1~trusty1) adduser (0 (null)) redis-server:i386 (0 (null))
2:2.8.4-2 - libc6 (2 2.14) libjemalloc1 (2 2.1.1) redis-tools (5 2:2.8.4-2) adduser (0 (null)) redis-server:i386 (0 (null))

I don’t fully understand the result, but I can see that the 2:2.8.19-1chl1~trusty1 package depends on redis-tools. OK, we’ll go with it.

sudo apt-get install redis-server=2:2.8.19-1chl1~trusty1
..

Preparing to unpack .../redis-tools_2%3a2.8.19-1chl1~trusty1_amd64.deb ...
Unpacking redis-tools (2:2.8.19-1chl1~trusty1) ...
Selecting previously unselected package redis-server.
Preparing to unpack .../redis-server_2%3a2.8.19-1chl1~trusty1_amd64.deb ...
Unpacking redis-server (2:2.8.19-1chl1~trusty1) ...
Processing triggers for man-db (2.6.7.1-1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up redis-tools (2:2.8.19-1chl1~trusty1) ...


Setting up redis-server (2:2.8.19-1chl1~trusty1) ...
Installing new version of config file /etc/redis/redis.conf ...
Installing new version of config file /etc/logrotate.d/redis-server ...
redis-server start/running, process 1001
Processing triggers for ureadahead (0.100.0-16) ...

Done.
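
To confirm the server is actually responding, a quick sanity check against the default port:

redis-cli ping
PONG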

Ansible Tips

Skip tags

ansible-playbook -i hosts --skip-tags provisioning  site.yml

Include with tags

- include: deploy.yml tags=deploy

More control on using include

- include: python_sandbox_env.yml tags=deploy
  when: EDXAPP_PYTHON_SANDBOX

Dynamic task naming

- name: "create {{ item }} application config"
  template: >
    src={{ item }}.env.json.j2
    dest={{ edxapp_app_dir }}/{{ item }}.env.json
  sudo_user: "{{ edxapp_user }}"
  tags: edxapp_cfg
  with_items: service_variants_enabled

Include Roles with Tag and Condition

  roles:
    - { role: foo, tags: ["bar", "baz"] }
    - { role: some_role, when: "ansible_os_family == 'RedHat'" }

Skip Running Role

- name: Deploy Webserver
  hosts: localhost
  vars_prompt:
    - name: run_common
      prompt: "Product release version"
      default: "N"

  roles:
    - { role: common, when: run_common == "Y" or run_common == "y" }

Override Variables

 ansible-playbook -i inventory/ setup_instances.yaml --limit=security_group_sb-devtest-app --tags=sb_deploy --extra-vars "build_file=staffingboss-20150228052745.tar.gz" -vvvv

List hosts using --list-hosts

ansible-playbook -i inventory/ setup_instances.yaml --limit=security_group_sb-devtest-app --tags=sb_deploy --extra-vars "build_file=staffingboss-20150228052745.tar.gz" --list-hosts

and see something like

playbook: setup_instances.yaml

Run common package? [N]:
  play #5 (Configure application servers): host count=2
    ec2-54-179-29-63.ap-southeast-1.compute.amazonaws.com
    ec2-54-254-42-103.ap-southeast-1.compute.amazonaws.com

Excluding a host from a playbook run (From: https://coderwall.com/p/mnnjkg/excluding-a-host-from-a-playbook-run)

ansible-playbook --limit 'all:!bad_host' playbook.yml

Targeting hosts with multiple roles (https://coderwall.com/p/yolzoa/targeting-hosts-with-multiple-roles)
This is a cool tip, since our deployment is multi-regional.
hosts

# Sydney data centre hosts
[sydney]
db-11
db-12
web-21
web-22
...
# Web servers
[webservers]
web-01
web-02
...

# LHS of load-balanced pairs
[left]
web-01
web-03
web-05

You can target hosts that are in the intersection of two or more groups using the limit option. For example:

ansible-playbook --limit 'sydney:&webservers:&left' playbook.yml

Accessing ec2.py dynamic inventory variables in templates using Ansible

I’m using Ansible to provision Tomcat, Memcached, Postgres.

With static inventory, I’m able to do this

hosts

[dbservers]
db.local
[appservers]
app1.local
app2.local

app/tasks/setup_tomcat_app.yml

- name: Setup SB ROOT context
  template: >
    src=root.xml.j2
    dest=/var/lib/tomcat7/conf/Catalina/localhost/ROOT.xml
  notify:
    - Restart Tomcat
  tags:
    - sb_tomcat

app/templates/root.xml.j2

{% set dbserver = hostvars[groups['dbservers'][0]]['ansible_eth1']['ipv4']['address'] %}

<Resource name="jdbc/postgres/configuration"
auth="Container"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
type="javax.sql.DataSource"
driverClassName="org.postgresql.Driver"
url="jdbc:postgresql://{{dbserver}}:5432/configuration" ....

I’m able to run this task on the appservers and retrieve the dbserver’s information.

But when using a dynamic inventory (EC2), things become harder. How do you achieve the equivalent of the following?

{% set dbserver = hostvars[groups['dbservers'][0]]['ansible_eth1']['ipv4']['address'] %}

Answer: Google

but it takes time

Search Ansible Group: https://groups.google.com/forum/#!searchin/ansible-project/dynamic$20inventory

Lots of results come back (it looks like dynamic inventory is not transparent enough in Ansible).

This link http://goo.gl/eKJfxk is similar but doesn’t help much.

Then I found https://coderwall.com/p/13lh6w/dump-all-variables.
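
The trick from that link is essentially a debug task that dumps everything Ansible knows about a host. A minimal sketch (the play targeting is just an assumption):

- hosts: all
  tasks:
    - name: Dump all variables for the current host
      debug: var=hostvars[inventory_hostname]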

After looking through all the variables Ansible provides, the following line works

{% set dbserver = hostvars[groups['security_group_sb-devtest-db'][0]]['ansible_default_ipv4']['address'] %}

All of this may be simple or documented somewhere, but I hope Ansible newbies will find it useful.
I’d appreciate your comments.

Recover dead Docker container

My Docker container, which runs a PostgreSQL service, was dead.

The root cause of the error was:

/var/lib/postgres permissions had been changed

Here are the steps I took to recover it.
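
Before the recovery itself, confirming that the container is dead and checking its logs can help pin down the root cause (pgdb is the container name used further below):

docker ps -a --filter name=pgdb
docker logs pgdb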

Access docker image

docker run -it postgres bash

Change permission

chmod -R 700 /var/lib/postgresql/9.3/main

Commit docker container

docker commit -m "correct postgres perms" jolly_goldstine

Run a new container (thanks to --volumes-from, the data stored in the dead container is restored).

docker run -d -p 5432:5432 --volumes-from pgdb  --name pgdbnew -e POSTGRES_PASS="anypass" postgres; sleep 30

Check Postgres

PGPASSWORD=anypass psql -h localhost -p 5432 --username=postgres -c '\conninfo'

See result

You are connected to database "postgres" as user "postgres" on host "localhost" at port "5432"

Done.

The following links may or may not be related, but they might help you in some way.