Outlook hangs on a new account with missing credentials

Yesterday at work, I was adding a group mailbox, which I believed I had access to, to my Outlook 2010 client. For some reason only Microsoft knows, this forces a restart of the Outlook client.

It turned out that I didn’t have the permissions required for this shared mailbox, and when I started Outlook it kept asking for a username and password for that mailbox.

When I clicked “Cancel”, Outlook stopped responding for a long time, so navigating to the menu where I could remove the account again took an eternity.

The quick way to remove the account from Outlook is, surprisingly, to use the Control Panel. There is a “Mail” applet there. It takes you to the same mail account management dialog as from within Outlook; the only difference is that because Outlook is closed, it doesn’t try to open the mailboxes, so I could remove the shared mailbox until I was granted permission to it today.

gentoo gnunet build fails with MHD_post_process linker error

The gnunet ebuild (from the zugaina layman overlay) fails with linker errors about MHD_destroy_post_processor and MHD_post_process?

Add to /etc/portage/package.use:

net-libs/libmicrohttpd  messages

Then emerge libmicrohttpd again, and then emerge gnunet.
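The whole fix can be sketched as a shell session (assuming root and a standard Portage layout; the exact gnunet package atom may differ in the overlay):

```shell
# Enable the "messages" USE flag so libmicrohttpd is built with
# the MHD_post_process / MHD_destroy_post_processor symbols.
echo 'net-libs/libmicrohttpd messages' >> /etc/portage/package.use

# Rebuild libmicrohttpd with the new USE flag, then (re)build gnunet.
emerge --oneshot net-libs/libmicrohttpd
emerge gnunet
```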

Success!

(at least for me)

RHEL6 apache httpd virtual host the proper way

My recipe for name-based virtual hosts in separate directories on RHEL:

We place all the virtual hosts under a new directory tree /var/www/vhosts:

# yum install httpd
# mkdir /var/www/vhosts
# semanage fcontext -a -t httpd_sys_content_t "/var/www/vhosts(/.*)?"
# restorecon -Rv /var/www/vhosts
# mkdir -p /var/www/vhosts/{site1,site2,site3}/{logs,htdocs}
# chown -R apache:apache /var/www/vhosts

I recommend using the FQDN of each site instead of the words “site1”, “site2”, in these examples.

Create the file /etc/httpd/conf.d/vhosts.conf with appropriate content such as:

NameVirtualHost *:80

<VirtualHost *:80>
  ServerName site1
  DocumentRoot /var/www/vhosts/site1/htdocs
  CustomLog "/var/www/vhosts/site1/logs/access.log" common
  ErrorLog "/var/www/vhosts/site1/logs/error.log"

  <Directory "/var/www/vhosts/site1/htdocs">
     Options None
     AllowOverride All
     Order Deny,Allow
     Allow from 127.0.0.1
  </Directory>
</VirtualHost>

<VirtualHost *:80>
  ServerName site2
  DocumentRoot /var/www/vhosts/site2/htdocs
  CustomLog "/var/www/vhosts/site2/logs/access.log" common
  ErrorLog "/var/www/vhosts/site2/logs/error.log"

  <Directory "/var/www/vhosts/site2/htdocs">
     Options None
     AllowOverride All
     Order Deny,Allow
     Allow from 127.0.0.1
  </Directory>
</VirtualHost>

and so on

(Don’t forget to set the Directory permissions properly. The above is just an example!)

Then activate the goodness:

# apachectl restart
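To verify that name-based resolution works without touching DNS, you can send explicit Host headers to the local server (a quick check from the web server itself, which the example Allow from 127.0.0.1 rule above permits):

```shell
# Each request should be answered from the matching DocumentRoot.
curl -s -H "Host: site1" http://127.0.0.1/
curl -s -H "Host: site2" http://127.0.0.1/
```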

Why is this method good?

1. Creating vhosts.conf in conf.d doesn’t modify any vendor-supplied files, which means we won’t lose our changes if we reinstall the package.

2. Keeping each virtual host and its logs under its own directory tree makes maintenance a breeze, and permissions can be separated to give developers access to specific vhosts only.

officially best way to get an up-to-date LAMP on RHEL6

Q: How do I update PHP, MySQL, and Apache on RHEL6 without breaking stuff?

A: Use the great packages from IUS!

1. Set up the IUS repo:

$ wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/6/x86_64/ius-release-1.0-11.ius.el6.noarch.rpm
$ wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/6/x86_64/epel-release-6-5.noarch.rpm
$ sudo rpm -Uvh ius-release*.rpm epel-release*.rpm

2. Make sure you have an up-to-date ca-certificates bundle installed.

3. See what PHP packages are available: yum list | grep -w ius | grep ^php

4. The “downside” (a minor inconvenience) of the greatness of IUS is that the packages they build provide the same things as the original, outdated Red Hat packages, but don’t obsolete them. This is intentional, and it is what makes me think IUS is the best way to obtain a current LAMP stack on RHEL or CentOS. What this boils down to is that the IUS packages have different names and cannot be installed at the same time as the Red Hat/CentOS packages.
This means that we must uninstall the original packages (if they are installed) before we can install the more recent IUS packages.

IUS provides a neat yum plugin called “replace” that can be used to do this en masse for a whole bunch of packages sharing a base name. If you have the stock packages php, php-devel, php-common, and php-cli installed, you can “upgrade” them all to the IUS php54 equivalents with a pretty one-liner: yum replace php --replace-with php54. (If you want to use the plugin, first install it with sudo yum install yum-plugin-replace.)
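As a shell session, the plugin route looks like this (a sketch; run as root or via sudo):

```shell
# Install the replace plugin, then swap the whole stock PHP stack
# (php, php-common, php-cli, php-devel, ...) for the IUS php54 packages.
sudo yum install yum-plugin-replace
sudo yum replace php --replace-with php54
```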

5. Install the IUS packages the usual way if you are not using the replace plugin.

In the case of RHEL6, postfix (a terribly outdated 2.6.6) requires mysql-libs, so you cannot install mysql55 straight away. What I did was a two-step:

# yum erase postfix
# yum install postfix php54 mysql55-server

This means that I uninstalled postfix, which depended on mysql-libs, and then reinstalled it at the same time as php54 and mysql55. It then uses mysql55-libs instead.

================================================================================
 Package          Arch      Version               Repository               Size
================================================================================
Installing:
 mysql55          x86_64    5.5.31-1.ius.el6      ius                     9.1 M
 mysql55-server   x86_64    5.5.31-1.ius.el6      ius                     9.6 M
 php54            x86_64    5.4.16-1.ius.el6      ius                     2.7 M
 postfix          x86_64    2:2.6.6-2.2.el6_1     rhel-x86_64-server-6    2.0 M
Installing for dependencies:
 apr              x86_64    1.3.9-5.el6_2         rhel-x86_64-server-6    123 k
 apr-util         x86_64    1.3.9-3.el6_0.1       rhel-x86_64-server-6     87 k
 apr-util-ldap    x86_64    1.3.9-3.el6_0.1       rhel-x86_64-server-6     15 k
 httpd            x86_64    2.2.15-28.el6_4       rhel-x86_64-server-6    821 k
 httpd-tools      x86_64    2.2.15-28.el6_4       rhel-x86_64-server-6     73 k
 mailcap          noarch    2.1.31-2.el6          rhel-x86_64-server-6     27 k
 mysql55-libs     x86_64    5.5.31-1.ius.el6      ius                     783 k
 mysqlclient16    x86_64    5.1.61-1.ius.el6      ius                     3.8 M
 perl-DBD-MySQL   x86_64    4.013-3.el6           rhel-x86_64-server-6    134 k
 perl-DBI         x86_64    1.609-4.el6           rhel-x86_64-server-6    707 k
 php54-cli        x86_64    5.4.16-1.ius.el6      ius                     2.6 M
 php54-common     x86_64    5.4.16-1.ius.el6      ius                     894 k

Transaction Summary
================================================================================
Install      15 Package(s)

That’s all, folks!

Error: Cannot retrieve repository metadata (repomd.xml) for repository: epel. Please verify its path and try again


I tried installing EPEL on a fresh install of RHEL6.1 (Santiago), and after adding the repo, yum fails with the above error.

This happens because the RHEL/CentOS installation doesn’t trust the HTTPS certificate used by mirrors.fedoraproject.org, which is issued by “GeoTrust SSL CA”.

In my case the package ca-certificates was not installed, and the /etc/pki/tls/certs/ folder didn’t contain any ca-bundle.crt or ca-bundle.trust.crt!

Solution: yum install ca-certificates

(I had to temporarily rpm --erase epel-release first, to get yum working again.)

I once got the same error message even though ca-certificates was installed and up to date. That time it was a firewall blocking HTTPS (port 443) traffic.

I worked around that by changing https to http in /etc/yum.repos.d/epel.repo.
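The workaround can be scripted; a sketch, assuming the stock epel.repo layout (only do this if you accept fetching repo metadata over plain HTTP):

```shell
# Rewrite every https:// URL in the EPEL repo definitions to http://,
# then drop the cached metadata so yum refetches it.
sudo sed -i 's|https://|http://|g' /etc/yum.repos.d/epel.repo
sudo yum clean metadata
```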

On PRISM, the NSA, Google, Facebook and the Echelon

Q: Are European politicians upset that America is spying and storing data on all its citizens or just that the fact has become public?

In my opinion, the fact that this was going on should have been obvious to every top politician who is not totally clueless about their own country’s intelligence operations.

It should also have been obvious to every half-clever internet user such as myself. However, things that we don’t see and that make us uncomfortable tend to be repressed, not talked about, and practically forgotten.

I guess that makes the question rhetorical, implying that the problem is that it has become public. But I would also think that most politicians, given that they (subconsciously?) knew what was going on, were still overwhelmed when they fully understood the scale of things.

My personal awareness level: I know that Google logs everything, and I know what kind of technical traces I leave when I browse the web. (I use the Firefox plugins DNT+, ABP, and NoScript, and I don’t have Flash Player or Java in the web browser. I do, however, load images automatically, even when linked from other sites.) This should make me leave far fewer unnecessary traces than most people. Sure, Google knows “me” and my search history, most likely even after I log out from their services, but that’s probably a price I can live with for using their search engine.

I have closed my Facebook account (kind of silly to call it “deleted”, right? It’s just inaccessible to everyone outside Facebook’s datacenter).

What bothers me incredibly much about “the PRISM incident” is that in the first denial statements I read from Google and Facebook, they were very explicit in talking about access to their servers. Anyone working with networks and intrusion detection/prevention systems knows that all high-end network equipment has mirror-port capabilities, that is, it can output all traffic passing through the equipment on a separate port. This exists for exactly the purpose of monitoring. We use it to analyze network traffic for anomalies; the NSA use it to copy the communications of the PRISM participants. In Sweden, the FRA use it for all traffic passing the geographic borders. So denying backdoors and server access in their datacenters is just a smokescreen for the ignorant masses. Those statements were not lies, but their purpose was to make people think the companies were not feeding the NSA data about their users, which of course is not true.

The NSA don’t want server access; they want to tap the communications and store them in their own datacenters.

Did you know that their new datacenter in Utah has yottabyte-scale storage capacity? That’s right: 24 zeroes. Huge beyond imagination. So thinking that they only listen in on communications, without storing and analyzing them, would be ultra-silly.

About 10-15 years ago there was talk about Echelon. Many people thought it was unrealistic and that the descriptions were exaggerated. I wonder if it was. At least today it is not.

RHEL6 package name for libdb is db4

Close to impossible to understand, but I just spent quite some time figuring out the package name for the Berkeley DB, libdb, on Red Hat (RHEL6).

Silly me. I should have known that the package is called “db4” and nothing else. After figuring that out, tacking on a “-devel” to get the headers package was a piece of cake.
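For next time: yum can map a file back to the package that ships it, which avoids the guessing entirely. A sketch:

```shell
# Ask yum which package provides the Berkeley DB shared library,
# then install the library and its headers.
yum provides '*/libdb-*.so*'
sudo yum install db4 db4-devel
```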