RHEL6 Apache httpd virtual hosts, the proper way

My recipe for name-based virtual hosts in separate directories on RHEL:

We place all the virtual hosts under a new directory tree /var/www/vhosts:

# yum install httpd
# mkdir /var/www/vhosts
# semanage fcontext -a -t httpd_sys_content_t "/var/www/vhosts(/.*)?"
# restorecon -Rv /var/www/vhosts
# mkdir -p /var/www/vhosts/{site1,site2,site3}/{logs,htdocs}
# chown -R apache:apache /var/www/vhosts
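
Depending on your SELinux policy and booleans, httpd may not be allowed to write its logs to directories labeled httpd_sys_content_t. If restarts fail with permission denied on the log files, giving the per-vhost logs directories the standard Apache log type should help (a sketch, using the same regex style as above):

# semanage fcontext -a -t httpd_log_t "/var/www/vhosts/[^/]+/logs(/.*)?"
# restorecon -Rv /var/www/vhosts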

I recommend using the FQDN of each site instead of the words “site1”, “site2”, etc. used in these examples.

Create the file /etc/httpd/conf.d/vhosts.conf with appropriate content such as:

NameVirtualHost *:80

<VirtualHost *:80>
  ServerName site1
  DocumentRoot /var/www/vhosts/site1/htdocs
  CustomLog "/var/www/vhosts/site1/logs/access.log" common
  ErrorLog "/var/www/vhosts/site1/logs/error.log"

  <Directory "/var/www/vhosts/site1/htdocs">
     Options None
     AllowOverride All
     Order Deny,Allow
     Allow from 127.0.0.1
  </Directory>
</VirtualHost>

<VirtualHost *:80>
  ServerName site2
  DocumentRoot /var/www/vhosts/site2/htdocs
  CustomLog "/var/www/vhosts/site2/logs/access.log" common
  ErrorLog "/var/www/vhosts/site2/logs/error.log"

  <Directory "/var/www/vhosts/site2/htdocs">
     Options None
     AllowOverride All
     Order Deny,Allow
     Allow from 127.0.0.1
  </Directory>
</VirtualHost>

And so on for the remaining sites; an FQDN-based example follows below.
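
If you follow the FQDN recommendation above, a vhost block looks the same, just with full host names. Here example.com is only a placeholder, and ServerAlias lets one vhost answer to several names (add a Directory block like the ones above as well):

<VirtualHost *:80>
  ServerName www.example.com
  ServerAlias example.com
  DocumentRoot /var/www/vhosts/www.example.com/htdocs
  CustomLog "/var/www/vhosts/www.example.com/logs/access.log" common
  ErrorLog "/var/www/vhosts/www.example.com/logs/error.log"
</VirtualHost>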

(Don't forget to set the Directory permissions properly. The above is just an example!)

Then activate the goodness:

# apachectl restart
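
A quick syntax check of the new config beforehand never hurts:

# apachectl configtest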

Why is this method good?

1. Creating vhosts.conf in conf.d means we don't modify any vendor-supplied files, so our configuration won't be lost if the package is reinstalled or upgraded.

2. Keeping each virtual host and its logs under its own directory tree makes maintenance a breeze and permissions can be separated to give developers access to specific vhosts only.

On PRISM, the NSA, Google, Facebook and the Echelon

Q: Are European politicians upset that America is spying and storing data on all its citizens or just that the fact has become public?

In my opinion, it should have been obvious to every top politician who is not totally clueless about their own country's intelligence operations that this was going on.

It should also have been obvious to every half-clever internet user such as myself. However, things that we don't see and that make us uncomfortable tend to be repressed, not talked about, and practically forgotten.

I guess that makes the question rhetorical, implying that the problem is that it has become public. But I would also think that most politicians, even if they (subconsciously?) knew what was going on, were still overwhelmed when they fully understood the scale of things.

My personal awareness level: I know that Google logs everything, and I know what kind of technical traces I leave when I browse the web. (I use the Firefox plugins DNT+, ABP, and NoScript, and I don't have Flash Player or Java in the web browser. I do, however, load images automatically, even when linked from other sites.) This should make me leave far fewer unnecessary traces than most people. Sure, Google knows “me” and my search history, most likely even after I log out from their services, but that's probably a price I can live with for using their search engine.

I have closed my Facebook account (kind of silly to call it “deleted”, right? It’s just inaccessible to everyone outside Facebook’s datacenter).

What bothers me immensely about “the PRISM incident” is that in the first denial statements I read from Google and Facebook, they were very explicit in talking about access to their servers. Anyone working with networks and intrusion detection/prevention systems knows that all high-end network equipment has mirror-port capabilities, that is, it can output all traffic passing through the equipment on a separate port. This exists for exactly one purpose: monitoring. We use it to analyze network traffic for anomalies; the NSA use it to copy the communications of the PRISM participants; in Sweden, the FRA use it for all traffic passing through the geographic borders.

So, denying backdoors and server access in their datacenters is just smokescreen wording for the ignorant masses. The statements were not lies, but their purpose was to make people think the companies were not feeding the NSA with data about their users, which of course is not true.

The NSA don't want server access; they want to tap off the communications and store them in their own datacenters.

Did you know that their new datacenter in Utah has yottabyte-scale storage capacity? That's right, 24 zeroes. That's huge beyond imagination. So thinking that they only listen in on communications, without storing and analyzing them, would be ultra-silly.

About 10-15 years ago there was talk about Echelon. Many people thought it was unrealistic and that the descriptions were exaggerated. I wonder if they were. At least today they are not.

Collaborative Storytelling!

Great news for everyone who loves to read and write fictional literature!

I’ve found a really good site for collaborative storytelling: CoST.LI – where writing stories together is great fun! The site is very new and improving with new features almost daily. Currently it features a nice ranking system, quite similar to the reputation mechanism of sites like StackOverflow and its sister sites in the Stack Exchange community. There is also a nice toplist where you can compare yourself to others.

Best of all, it’s totally free (some Google ads are meant to support it; good luck with that one!), and it uses OpenID for authentication.
Give it a try! It’s multilingual, currently with free stories in English, and there are some impressive ones in Swedish too.

So, what is collaborative storytelling? Simply put: someone writes a story, and someone else can continue it. One of the most interesting features on this site is that each chapter can have several continuations, so there can potentially be an unlimited number of stories in the end.

New online game!

Hi! A close friend has published a new online multiplayer business strategy board game.

The name is a bit corny: Ape Broker, but the idea is really cool. If you remember the old Windows game Oil Baron from 1992, Ape Broker is based on the same idea, but instead of being strictly turn-based and requiring the players to share the same mouse and keyboard, this new, addictive game has a bunch of new features that make it playable over the internet.

For the true fans, the author has even made it possible to gamble for real money, by participating in “ante” games, where the winner gets the other players’ anted amount.

Check it out at www.apebroker.com !

How to enable Permalinks in WordPress on LAMP

In my last post about setting up WordPress on a LAMP system, I omitted how to successfully enable permalinks, something you of course want on your blog.

Permalinks in WordPress are a way to have pretty URLs for your posts, making it easy to link directly to them and giving them a human-readable format.

For this to work in a LAMP environment, you need mod_rewrite enabled in Apache, and the tricky part when it comes to WordPress is to enable mod_rewrite for the directory where your blog resides.
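
To check whether mod_rewrite is actually loaded, something like this usually does the trick on a RHEL-style system (the command and config paths vary a bit between distros; the module is normally enabled out of the box):

# httpd -M | grep rewrite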

Most Linux distros default to rather sane settings and typically have something like this in their Apache config:

    <Directory "/var/www/html">
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

That means (if /var/www/html is your DocumentRoot) that WordPress' way of using .htaccess to control the rewriting with mod_rewrite will not work as intended, due to the AllowOverride None directive.

The fix is easy. Just add something like this to your apache config:

<Directory "/var/www/html/wordpress">
    AllowOverride All
</Directory>

Reload Apache and you'll be all set! (Replace /var/www/html/wordpress above with the directory where you have WordPress installed; that is, the same directory where WordPress created the .htaccess file when you enabled permalinks.)
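
For reference, the .htaccess that WordPress writes for a subdirectory install typically looks something like this (details vary with the WordPress version and install path, so treat it as an illustration only):

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /wordpress/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /wordpress/index.php [L]
</IfModule>
# END WordPress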

Step-by-step guide to set up WordPress on an existing LAMP system

0. If you get permission denied on the shell commands, try prefixing them with “sudo ”

1. Log in as root in mysql:

lamp$ mysql -u root -p

2. Create a database and MySQL user dedicated to wordpress:

mysql> CREATE DATABASE blogdb CHARACTER SET utf8;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE USER 'blogger'@'localhost' IDENTIFIED BY 'b1ogpw';
Query OK, 0 rows affected (0.05 sec)

mysql> GRANT ALL PRIVILEGES ON blogdb.* TO 'blogger'@'localhost';
Query OK, 0 rows affected (0.63 sec)

3. Download and unpack WordPress into a directory served by Apache:

lamp$ cd /var/www/html
lamp$ wget http://wordpress.org/latest.tar.gz
lamp$ tar zxf latest.tar.gz
lamp$ rm latest.tar.gz
lamp$ cd wordpress

4. Configure WordPress with the database details and secret keys (edit all occurrences of the word “here”). Use the online generator to get good values for the secret keys:

lamp$ vi wp-config-sample.php
lamp$ mv wp-config-sample.php wp-config.php
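
With the values from step 2, the database-related lines in wp-config.php should end up looking roughly like this:

define('DB_NAME', 'blogdb');
define('DB_USER', 'blogger');
define('DB_PASSWORD', 'b1ogpw');
define('DB_HOST', 'localhost');
define('DB_CHARSET', 'utf8');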

5. Run the WordPress installation script from a web browser. The URL will be something like this:

http://your-lamp/wordpress/wp-admin/install.php

6. Add the name for your blog, a user name (admin), a password, and your email address, and click the “Install WordPress” button at the bottom.

7. Done! Now log in, delete the sample post and sample page, and start customising your WordPress site.

Good luck!

Thoughts on fake SSL certificates for web sites

As you know, a while ago an intruder into one of Comodo's affiliates was able to issue SSL certificates for:

  • mail.google.com
  • login.live.com
  • login.yahoo.com (three different certificates)
  • login.skype.com
  • addons.mozilla.org
  • www.google.com
  • “global trustee”

The identity theft was probably the work of a dictatorship planning to implement a man-in-the-middle attack, silently monitoring the HTTPS traffic to the above sites.

With control over all DNS traffic in and out of the country, it would be possible to spoof the DNS replies so that, for instance, the A record for login.yahoo.com points to your proxy, which has the bogus certificate installed to decrypt the traffic and simply resends the requests to the real https://login.yahoo.com/ site.

My suggestion (at least for security-aware techies): An addition to the web browser that remembers the certificate fingerprint, issuer, and expiry date of your favorite HTTPS sites.

Each time you visit an HTTPS site, a simple local lookup will compare the site's certificate with the remembered values, and if something has changed, present the user with a notice and a choice to cancel or investigate. For instance, if mail.google.com changes from a VeriSign certificate to one from a smaller CA (Comodo, StartCom, etc.) long before the expiry date, you may want to think twice before continuing.
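
Until something like that exists in the browser, the same data points can be pulled out manually with openssl, just as an illustration of what such an add-on would remember and compare:

$ echo | openssl s_client -connect mail.google.com:443 2>/dev/null \
    | openssl x509 -noout -fingerprint -issuer -enddate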

See Comodo’s blog for more info.

Comments are always welcome.