For whatever reason, you may end up with collections in MongoDB that you cannot access or delete because of their name. Maybe you led the collection name with a number instead of a letter; a name can be illegal or inaccessible for any of a variety of reasons. In my case, I had collections with names like the following:

tmp.mr.mapreduce_1368742878_ec2-123-456-789-012.compute-1.awesomecloud.host_20534
In this case, it’s likely the dashes causing the issue. That name is a mouthful, so I’ll just refer to it as ILLEGALLYNAMEDCOLLECTION. What’s more interesting, though, is how you access that data. You can’t count it like so:
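Here is a sketch of the failing call, using the real name from above. The mongo shell evaluates the name as a JavaScript expression, so the dashes read as subtraction and the expression never references your collection at all:

```javascript
// The dashes make this an invalid JavaScript expression:
db.tmp.mr.mapreduce_1368742878_ec2-123-456-789-012.compute-1.awesomecloud.host_20534.count()
```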
You also can’t drop() it that way. How do you access it? Use getCollection.
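getCollection() takes the name as a string, which sidesteps the JavaScript identifier rules entirely (using the stand-in name here):

```javascript
db.getCollection("ILLEGALLYNAMEDCOLLECTION").count()
db.getCollection("ILLEGALLYNAMEDCOLLECTION").drop()
```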
So long, illegally named collection!
Changed the path to an executable? If the old location is still in your shell’s hash of command paths, you need to rebuild it. On tcsh, you’d do this by running ‘rehash’. On bash, run:
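The bash builtin for this is hash, with its -r flag:

```shell
hash -r    # forget all remembered command locations
```

The next invocation of each command then triggers a fresh PATH lookup.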
What if you want to turn off hashing completely? Run the following, or add it to your .bashrc to make it permanent:
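That is bash’s hashall option, switched off with set:

```shell
set +h    # stop bash from caching command locations
```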
Got a Debian package so old it’s no longer found in the repositories? Need a specific version from a server, but don’t have the original .deb file? Repack the installed copy with dpkg-repack.
sudo apt-get install dpkg-repack
sudo dpkg-repack YOUROLDPACKAGENAME
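The rebuilt .deb lands in the current directory; copy it wherever it’s needed and install it with dpkg (the filename below is illustrative — dpkg-repack names the file after the package, version, and architecture):

```shell
sudo dpkg -i YOUROLDPACKAGENAME_1.0-1_amd64.deb
```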
Voila – instantly recreated package. Now fix your code so you can run current software again.
I recently had this experience: when I pinged a hostname, it took a long time for ten pings to go through — like 45 seconds:
$ ping -c 10 my.example.com
PING my.example.com (10.9.8.7) 56(84) bytes of data.
64 bytes from 10.9.8.7: icmp_req=1 ttl=50 time=52.7 ms
64 bytes from 10.9.8.7: icmp_req=2 ttl=50 time=54.4 ms
64 bytes from 10.9.8.7: icmp_req=3 ttl=50 time=52.8 ms
64 bytes from 10.9.8.7: icmp_req=4 ttl=50 time=58.9 ms
64 bytes from 10.9.8.7: icmp_req=5 ttl=50 time=57.0 ms
64 bytes from 10.9.8.7: icmp_req=6 ttl=50 time=59.2 ms
64 bytes from 10.9.8.7: icmp_req=7 ttl=50 time=52.8 ms
64 bytes from 10.9.8.7: icmp_req=8 ttl=50 time=59.1 ms
64 bytes from 10.9.8.7: icmp_req=9 ttl=50 time=64.1 ms
64 bytes from 10.9.8.7: icmp_req=10 ttl=50 time=53.2 ms
--- my.example.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 45570ms
rtt min/avg/max/mdev = 52.746/56.482/64.160/3.691 ms
But if I pinged by IP address, it went through in normal time:
$ ping -c 10 10.9.8.7
PING 10.9.8.7 (10.9.8.7) 56(84) bytes of data.
64 bytes from 10.9.8.7: icmp_req=1 ttl=50 time=52.5 ms
64 bytes from 10.9.8.7: icmp_req=2 ttl=50 time=51.1 ms
64 bytes from 10.9.8.7: icmp_req=3 ttl=50 time=53.2 ms
64 bytes from 10.9.8.7: icmp_req=4 ttl=50 time=60.2 ms
64 bytes from 10.9.8.7: icmp_req=5 ttl=50 time=54.0 ms
64 bytes from 10.9.8.7: icmp_req=6 ttl=50 time=59.1 ms
64 bytes from 10.9.8.7: icmp_req=7 ttl=50 time=59.1 ms
64 bytes from 10.9.8.7: icmp_req=8 ttl=50 time=58.9 ms
64 bytes from 10.9.8.7: icmp_req=9 ttl=50 time=54.8 ms
64 bytes from 10.9.8.7: icmp_req=10 ttl=50 time=54.3 ms
--- 10.9.8.7 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 51.161/55.751/60.235/3.128 ms
The ping summary stats themselves were the same; only the overall time was longer. What’s going on here? A little net research, and I had my answer: reverse DNS. I hadn’t set up the reverse DNS mapping for this host yet, so when I pinged by hostname, ping tried to reverse-resolve the IP address for each reply. You can disable this lookup with the -n flag, and the total time returns to normal:
$ ping -c 10 my.example.com -n
PING my.example.com (10.9.8.7) 56(84) bytes of data.
64 bytes from 10.9.8.7: icmp_req=1 ttl=50 time=61.1 ms
64 bytes from 10.9.8.7: icmp_req=2 ttl=50 time=53.2 ms
64 bytes from 10.9.8.7: icmp_req=3 ttl=50 time=59.0 ms
64 bytes from 10.9.8.7: icmp_req=4 ttl=50 time=59.0 ms
64 bytes from 10.9.8.7: icmp_req=5 ttl=50 time=53.6 ms
64 bytes from 10.9.8.7: icmp_req=6 ttl=50 time=59.4 ms
64 bytes from 10.9.8.7: icmp_req=7 ttl=50 time=51.2 ms
64 bytes from 10.9.8.7: icmp_req=8 ttl=50 time=59.2 ms
64 bytes from 10.9.8.7: icmp_req=9 ttl=50 time=54.3 ms
64 bytes from 10.9.8.7: icmp_req=10 ttl=50 time=52.9 ms
--- my.example.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 51.224/56.339/61.156/3.404 ms
So if you have long ping times by host name, check for reverse DNS – it’s probably not mapped.
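To check whether the reverse (PTR) record exists before blaming ping, you can query it directly (assuming dig is installed; host works too):

```shell
dig -x 10.9.8.7 +short    # empty output means no reverse mapping
```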
Does it really matter what programming language you use? Of course. Anyone who says otherwise is being naive. Some considerations when choosing a language are fairly obvious. These include:
- Suitability of the language to the problem you are trying to solve
- Availability of libraries
- In-house expertise
- Stability / security
- Resource / fund availability
Whatever you pick, make sure the language is suited to the job. The following considerations, however, are decidedly less obvious:
- Availability of talent
- Likelihood of expertise for a buyer
- Standardization of maintenance / ease of transfer
The first point is key: how easy is it to recruit talent in the language you pick? This has a direct impact on how quickly you can scale your development team. The remaining points have to do with the end game: how attractive you can make your company for an acquisition.
I have worked at a few companies that built their applications around Perl. It is not a glamorous language: despite being quite robust, with extensive library support, Perl doesn’t get the buzz. That makes recruiting hard: while at times it feels like you can throw a rock and hit a Ruby or PHP programmer, finding a good Perl programmer is a bit like finding a unicorn. (Yes, that’s hyperbolic, but you get my point.) The same is true on the sale side: anyone looking to acquire the company will have to invest in the talent to actually maintain that code base, and the same scarcity applies.
However, this shouldn’t necessarily send you off to immediately rewrite your application in a more popular language. Keep this rule in mind: build with the resources you have. Do you have existing talent strongest in a less popular language? Build with that. You’ll get to market sooner and waste less time. If you’re worried about the end game when you’re just starting, you may need to rethink your priorities. I have been with companies that built their product around Perl and were acquired. Perl can, in fact, power high-traffic, scalable web sites. (Just ask Amazon, Craigslist, IMDB, or LiveJournal.) Recruiting may be slightly harder and asset transfer more complicated, but that doesn’t mean you can’t find success.
So does language choice matter? Yes, of course, though maybe not as much as some people think. It affects your ability to scale your technical team, and it makes for a bullet point in your acquisition pitch. But speed to market is more important, and people buy products, not languages.
As with most things startup-related, you have to take into account a multitude of factors to make your decisions. Make sure language choice is not your only consideration. Focus on your product, and the rest will follow.
I recently reviewed someone’s bash code and noted their use of getopt. I had always used getopts, so I was at first confused (by the unfamiliar syntax), then puzzled: which one is better, getopt or getopts?
getopt is the older of the two, and is a separate binary rather than a shell builtin. It tends to be pretty robust, and it supports long options (i.e., you can use --foo instead of just single-letter options like -f). getopt will also re-arrange the parameters.
getopts is newer, but is built into the shell. Its syntax tends to be simpler to use.
Let’s see some quick examples of usage:
#!/bin/bash
# getopt.sh example

# Execute getopt (note: the long-option list is comma-separated)
ARGS=$(getopt -o a:b:c -l "ay:,bee:,cee" -n "getopt.sh" -- "$@")

# Bad arguments
if [ $? -ne 0 ]; then
    exit 1
fi

eval set -- "$ARGS"

while true; do
    case "$1" in
        -a|--ay)
            shift
            if [ -n "$1" ]; then
                echo "-a used: $1"
                shift
            fi
            ;;
        -b|--bee)
            shift
            if [ -n "$1" ]; then
                echo "-b used: $1"
                shift
            fi
            ;;
        -c|--cee)
            shift
            echo "-c used"
            ;;
        --)
            shift
            break
            ;;
    esac
done
And now getopts:
#!/bin/bash
# getopts example

while getopts a:b:c flag; do
    case $flag in
        a)
            echo "-a used: $OPTARG"
            ;;
        b)
            echo "-b used: $OPTARG"
            ;;
        c)
            echo "-c used"
            ;;
        ?)
            exit
            ;;
    esac
done
shift $(( OPTIND - 1 ))
Getopts does seem much simpler. Let’s run both, and see what happens:
$ ./getopt.sh -a "opt a" -b opt_b -c arg1
-a used: opt a
-b used: opt_b
-c used
$ ./getopts.sh -a "opt a" -b opt_b -c arg1
-a used: opt a
-b used: opt_b
-c used
$
As expected, the output is the same. However, let’s make one small change: let’s put the argument first.
$ ./getopt.sh arg1 -a "opt a" -b opt_b -c
-a used: opt a
-b used: opt_b
-c used
$ ./getopts.sh arg1 -a "opt a" -b opt_b -c
$
Whoops! Remember how getopt re-arranges the parameters while getops doesn’t? With getopts, you need to put the arguments last, or it stops processing options as soon as it hits a non-option argument. getopt, on the other hand, re-arranges the parameters to put the options first, then adds a ‘--’, then appends the arguments. Let’s hack our getopt.sh script to see this in action.
#!/bin/bash
# getopt.sh - modified to show $@ contents

echo "BEFORE GETOPT: $@"

# Execute getopt
ARGS=$(getopt -o a:b:c -l "ay:,bee:,cee" -n "getopt.sh" -- "$@")

# Bad arguments (check before anything else resets $?)
if [ $? -ne 0 ]; then
    exit 1
fi

echo "AFTER GETOPT: $@"

eval set -- "$ARGS"

echo "AFTER SET -- \$ARGS: $@"

while true; do
    case "$1" in
        -a|--ay)
            shift
            if [ -n "$1" ]; then
                echo "-a used: $1"
                shift
            fi
            ;;
        -b|--bee)
            shift
            if [ -n "$1" ]; then
                echo "-b used: $1"
                shift
            fi
            ;;
        -c|--cee)
            shift
            echo "-c used"
            ;;
        --)
            shift
            break
            ;;
    esac
done

echo "AFTER OPTION PROCESSING: $@"
And the output:
$ ./getopt.sh arg1 -a "opt a" -b opt_b -c
BEFORE GETOPT: arg1 -a opt a -b opt_b -c
AFTER GETOPT: arg1 -a opt a -b opt_b -c
AFTER SET -- $ARGS: -a opt a -b opt_b -c -- arg1
-a used: opt a
-b used: opt_b
-c used
AFTER OPTION PROCESSING: arg1
Neither method is wrong (and I’m sure there are more tweaks I could make to each), but I think I’m going to lean toward the getopt camp. It’s a little more work, but it seems a little more robust.
Some of us old terminal jockeys were annoyed to find bash completion enabled by default on new Ubuntu installs. Here’s the description for the package:
bash completion extends bash’s standard completion behavior to achieve
complex command lines with just a few keystrokes. This project was
conceived to produce programmable completion routines for the most
common Linux/UNIX commands, reducing the amount of typing sysadmins
and programmers need to do on a daily basis.
The idea is pretty interesting: provide more intelligent tab completions for commands. For example, if you type ‘mysqladmin flush-p<tab>’, it will complete the term to ‘flush-privileges’. Very neat.
In practice, those who are quite familiar with tab completion for files and directories may find themselves banging their heads on the table. One of my fellow programmers grew annoyed trying to get filename completion after a perl -d.
Uninstalling should be as simple as this:
sudo apt-get remove bash-completion
Unfortunately, that still leaves files behind, and hence it’s not really uninstalled. In fact, if you run dpkg -l bash-completion, you’ll see it’s in status ‘rc’: removal desired, config files remain. Easily fixed:
sudo dpkg --purge bash-completion
Log out and log in, and voila – all gone!
If you’re sharing the system with other people who do like tab completion, you’re probably better off just adding the following to your .bashrc:
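The one-liner in question is bash’s complete builtin with its -r (remove) flag:

```shell
complete -r    # remove completion specifications
```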
With no arguments, this removes all bash completion specifications. You can also use it to fine-tune which completions get removed. This would remove bash completion for emacs, ack, and perl:
complete -r emacs ack ack-grep perl
To verify that it’s removed, try:
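The -p flag lists each active completion as a reusable complete command:

```shell
complete -p    # print all current completion specifications
```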
This prints all completions currently active.