Category Archives: Miscellaneous Geekery

Cost to Redact 10,000 Docs?

Assume you have 10,000 documents that average 6 pages each. Let us also pretend that we have an attorney on hand who can redact one page every three minutes. Let’s also say this attorney charges $55 an hour for her services, and an identical attorney will be double-checking each redaction for accuracy at a pace of 200 pages per hour.

If every page required redaction and all the variables held true across the entire life of the project, the total out of pocket expense for redacting 60,000 pages would run just over $180,000.
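For the curious, the arithmetic behind that figure works out like this in shell, a sketch using only the rates given above:

```shell
pages=$(( 10000 * 6 ))              # 60,000 pages total
redact_hours=$(( pages * 3 / 60 ))  # 3 minutes per page = 3,000 hours
qc_hours=$(( pages / 200 ))         # 200 pages per hour = 300 hours
total=$(( (redact_hours + qc_hours) * 55 ))  # both attorneys bill $55/hour
echo "$total"                       # prints 181500 -- just over $180,000
```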

The key to cutting costs is to increase the pace of applying each redaction, increase accuracy to reduce time spent performing quality control checks, and reduce the hourly rate. To get a sense of what attorney rates are for 2017, check out the Salary Guide published yearly by Parker + Lynch at

Unfortunately, there is no magic bullet that exists in the tech industry that will automatically do all the work for us. There are some great tools available that assist with identifying and redacting specific text strings and patterns, but at the end of the day, this is still a manual process that is often very precise in nature.

When dealing with large volumes of information that require redaction, the most cost-effective and accurate solutions available will be found from service providers that can blend the best technology available with the right group of experienced attorneys. To see how one provider addressed some of these challenges, check out the recently released case study found at

Nuix 101: Replace an Encrypted Zip

This video will show you how to replace an encrypted zip file in your Nuix Case.

To see more Nuix how-to videos, visit their YouTube playlist here:

New York eDiscovery Practitioners Supporting a Good Cause

Courtney Fay

Please help a fellow eDiscovery guru, Courtney Fay, by supporting a good cause.  She’ll be riding in the “Tour de Cure” this weekend to raise money to fight diabetes along with several other members of the Harris Beach team.  Combined, Harris Beach has raised $1,400 for this year’s run.  Let’s help them reach their goal of raising $5,000 before Sunday! Details below.

Tomorrow I go to pick up my race packet for the Tour de Cure.  I am looking at my progress and asking, can I reach Champion status by the time I pick up my packet?  Champion status means raising over $1,000.  So far I have raised $327, thanks to the wonderful support of so many people.  I think I can reach that goal of Champion status, but I need to ask for your help.

My personal fundraising page is right here:

In 5 days, I will ride with our fantastic Harris Beach team in the Tour de Cure.  I have a long way to go toward reaching my goal, and I hope you can help me get there.  I have been a champion the last 3 years running.  This involves raising over $1,000.  With your support, I know I can do that again.

Thank you

For the past 3 years, I have ridden in the Tour de Cure, which is close to my heart.  The support I receive every year is overwhelming.  I would like to thank you for your encouragement and sponsorship.  I could not do this without you.  When I ride, I know you are all behind me.

I’m at it again!

I am riding in the Tour de Cure for the 4th year in a row.

Last year I raised over $1,000!! It was fantastic, and I want to see if I can beat that this year.  Yep, $1,500!  If you donated last year, thank you again.  I hope you can all support me this year, as I join loads of people in riding for this worthy cause.

Why I am riding

My grandfather died before I was born, from complications due to diabetes.  It was always a sadness for me that I never got to know him.  I ride to find a cure and raise awareness, so that no other kids will miss out on getting to meet their grandfathers.  A colleague of mine has type 2 diabetes, and another colleague and a friend from high school both have kids with type 1 diabetes.  No child should have to grow up with such a limiting disease.  We can find a cure and it’s time that we make that happen.  I am excited to be in the fight.

Support Me in Tour de Cure!

I will be cycling in the American Diabetes Association’s Tour de Cure fundraising event. Please support me with a donation by selecting the “Sponsor Me” button. Our efforts will help set the pace in the fight against diabetes. So let’s get in gear and ride to Stop Diabetes!

Help Make a Difference in the fight against diabetes!

Each mile I ride and the funds I raise will be used in the fight to prevent and cure diabetes and to improve the lives of all people affected by diabetes.

No matter how small or large, your generous gift will help improve the lives of nearly 24 million Americans who suffer from diabetes, in the hope that future generations can live in a world without this disease. Together, we can all make a difference!

Thank you,

Courtney Fay, CEDS
Litigation Support Developer

#CommandLineFu: 8 Deadly Commands You Should Never Run on Linux

Being a command line junkie, I thought I’d share this fun little article I found the other day from @chrisbhoffman – 8 Deadly Commands You Should Never Run on Linux.

rm -rf / – Deletes Everything!

:(){ :|: & };: – Fork Bomb

mkfs.ext4 /dev/sda1 – Formats a Hard Drive

command > /dev/sda – Writes Directly to a Hard Drive

dd if=/dev/random of=/dev/sda – Writes Junk Onto a Hard Drive

mv ~ /dev/null – Moves Your Home Directory to a Black Hole

wget -O - | sh – Downloads and Runs a Script

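The fork bomb above is less cryptic than it looks: “:” is just a function name. Renaming it makes the trick readable — this sketch only defines the function, which is harmless; invoking it is what brings a machine down:

```shell
# :(){ :|: & };: with the function renamed from ":" to "bomb" -- do NOT call it.
bomb() { bomb | bomb & }
# Each call pipes itself into itself and backgrounds the pair, so the
# process count doubles until the system runs out of resources.
type bomb | head -n 1
```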

You Can Teach an Old Lawyer New Tricks

“Just print it out for me.”

Famous last words said by many an attorney in today’s technology-laden litigation field. There was a time (cue nostalgic flashback music) when the largest matters would consist of a few hundred boxes of paper. Today, the sky’s the limit when it comes to the number of records that can exist in a single case.

When the information age started spilling over into the legal profession, there were few, if any, technical solutions to the technology problem we were all facing. When a client inbox needed to be collected and reviewed for discovery, we did just that: “print it all out.” It did not take long before the very same approach would just about fill the Grand Canyon with dead trees.

As the amount of information being created by our business clients continued to grow, the technology designed to combat the problem got better and faster. Innovation came fast and hard over the next few years, resulting in the birth of eDiscovery.

Today, we find ourselves firmly in the adolescence of the industry’s growth, with more specialists, uniform standards and best practices. The software solutions continue to advance at a trailblazing pace and, at times, even claim to be smarter than we humans. However, keeping up with this technology can be (and often is) a job in and of itself.

So, how are those tasked with supporting these efforts inside of a law firm supposed to keep pace? More importantly, assuming you can keep pace, how do you fight the “just print it out” mentality that exists in virtually all firms across the land? A recent study [1] concluded that there is “a profound lack of technological savvy among law firms.” [2]

The most impressive advancements over the last couple of years have surrounded buzzwords like predictive coding and advanced analytics. In a nutshell, these are specialized software tools designed to speed up the pace of review and eliminate as much hourly billable labor as possible during the discovery process. One major caveat: if leveraged improperly, one can find herself in a pretty hairy situation, giving credence to the statement that technology is only as good as those using it [3]. This is no real secret and has played into the trepidation within law firms to hurry up and wait to see how other firms have implemented these new solutions.

The mission of this article is to provide an oversimplified how-to primer on leveraging one (of many) advanced analytical tools as a quality control measure prior to final document production, as opposed to an alternative review methodology. In other words, there’s no need to change the way that you’re currently managing and reviewing discovery documents. This will show you (and your reluctant attorney) how to dip your toe into the shallow end of the fancy whiz-bang tech world by taking the advanced out of advanced analytics.

Let’s assume that your firm has been engaged by your client to represent them in a bet-the-company type of litigation that involves the collection, hosting, and review of a quarter-million documents. For the sake of simplicity, let’s assume all of the documents in question are emails (and their attachments) from 10 key players within the company over a time frame of five years. Your firm has already gone through the process of:

  • selecting a vendor to assist with collecting the email data in a forensically sound manner;
  • converting the records into an easily reviewable format;
  • culling them down using basic filters; and
  • hosting them in a web-based review platform that your attorneys can access from anywhere.

At this stage, the common practice is to have a war-room of contract attorneys conduct the first-pass review of these records for responsiveness and to identify any potentially privileged documents. Typically, after varied levels of quality assurance checks, what results is then produced to opposing counsel. This is a tried and true approach that has become its own industry since the economic downturn a few years ago.

The courts, unfortunately, have given very little leeway in terms of providing realistic deadlines for production, even though the amount of data keeps growing at a breakneck pace. Some courts have even started imposing sanctions for consistently missing production deadlines. [4] As a result, the directive handed down in these scenarios is always that of speed. Review more documents in less time. In turn, being that we’re all human, this has led to a number of errors. The worst among them is producing a privileged document because it was improperly tagged. This can have some serious ramifications, because it is not just the document that is privileged, but also the entirety of its content. There are countless horror stories and case studies specifically dealing with the aftermath of this particular scenario. However, for the purposes of this example, we’ll take a look at one way to prevent this from happening by utilizing “Email Thread Analysis.”

Many service providers, due to the slow adoption of advanced and predictive technologies, have started offering these services at little to no additional cost. It should not take you long to find one amongst your existing approved vendor list that would jump at the opportunity to show off their wares.

In this scenario, let’s assume that you and I are two of ten key custodians in question. We both work in the research and development division of a successful tech startup company. I sent you an email back in 2006 that contains information about the project that we’re working on together. Within the body of this email, I make reference to a number of topics that would be considered trade secrets, deeming the record privileged under the parties’ agreement. You respond back the following day with all of the original text from my email in the body of your email. This continues back and forth over the next six weeks, totaling five emails before the email chain stops.

During a traditional linear review, each email within the thread would exist as its own record and be designated to a batch to which a contract reviewer would be assigned. Let’s assume that three of the five emails were assigned to one reviewer, and the other two were assigned to another. During the review, these records typically do not get grouped together for side-by-side comparison. Being that all of the content in the original message that made the document privileged also exists in every other email in the thread, it is safe to assume that every email in the conversation should be flagged as privileged. Due to the review being conducted in such a manner, it is not uncommon for one of these emails to slip through the cracks.

The simple solution in this case is to ask your provider to run a report leveraging email thread analysis technology to identify all anomalies of this nature. In return, you’ll have a list of every single document that is about to be produced that would otherwise contain privileged information. The screen shot below illustrates an example using a five-email reply chain.

Paralegal Today Q1 2014

The first branch in this tree shows the originating email in this conversation flagged as privileged. You can tell this by the red circle indicator to the left of the subject line. The second email, or first reply, is also flagged as privileged. As soon as we get to the third email, we see the green tag indicating that this record was flagged as responsive. As you can see, all subsequent emails have been flagged as privileged.

A well-versed provider with access to the right tools can automate the process of identifying every record that has been tagged within a conversation in contradiction with others.  Usually, you would have someone put an extra pair of eyes on the record in question, along with the surrounding communication, to verify that this was not in error.

While this is not an end-all be-all solution to be leveraged across all matters, it is one of many weapons that you should have in your arsenal to prevent potentially costly and embarrassing missteps. Your attorneys will not have to learn anything new, and when asked how you magically found this document, you can say “I used advanced analytics”.



Kris Wasserman is a Sales Engineer and passionate technology evangelist with over 10 years of experience working hands-on with litigators, in-house counsel, and litigation support professionals in the face of complex ESI-laden matters and regulatory investigations as an eDiscovery Project Manager. He serves as one of many subject matter experts at Superior Discovery in New York City, providing technical sales support to the business development team. Kris has recently begun providing monthly educational seminars for attorneys and legal support staff for the sole purpose of streamlining the adoption of the latest technology solutions in a client-specific and practical manner.  For more information contact Kris at or follow him on Twitter @KrisWasserman.

[1] ILTA’s 2013 Technology Survey

[2] Does Technology Leap While Law Creeps?

[3] EDI-Oracle Study: Humans Are Still Essential in E-Discovery

[4] Perils of E-Discovery Reflected in Sanctions

This article was featured in “The Paralegal Today” magazine on May 1st, 2014. 


#MetadataMatters – Don’t Be That Guy

I seldom get behind viral videos, but quite frankly, I’m shocked no one has thought of this yet. Thanks to my good friend Kris Taylor for passing it along. 


Check out for news aggregated from around the web geared towards the Legal Technology Industry.

CommandLineFu: File/Byte Count of Folder List

while read -r dir; do echo -n $(du -sb "$dir") ; echo "|$(find "$dir" -type f | wc -l)" ; done < UD_input.txt | tee UserData.log


Reads a text file with one directory per line as input and prints the byte count and file count for each, like so:


1842531456 ./FD99_UserShare/z/Bond/|213

CommandLineFu: FileType Report w/ Dates

More find magic — this little beast takes a while to run on large directories but is worth its weight in gold.  Now I just need a way to convert Epoch time to YYYYMMDD format inline. (New project)

find ./foo/ -type f -printf '%f|%h|%s|%AY%Am%Ad|%TY%Tm%Td|' -exec stat --printf "%W|" '{}' \; -exec file -bp '{}' \; > bar.log

%f == file name without leading directories

%h == leading directories without file name

%s == size in bytes

%A == Last access time (Y,m,d == YYYYMMDD format)

%T == Modification time (Y,m,d == YYYYMMDD format)

(using printf keeps everything on the same line)

stat %W == file birth date in Epoch time

file -bp == checks file type, b==brief, p==preserve date
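On the Epoch-to-YYYYMMDD front, GNU date will do the conversion inline with -d @<seconds>. A sketch — the timestamp and filename below are just examples:

```shell
# Convert an example epoch timestamp to YYYYMMDD:
date -u -d @1409529600 +%Y%m%d      # prints 20140901
# Applied to stat's birth-time field (%W is 0 on filesystems without birth times):
touch somefile
stat --printf '%W\n' somefile | xargs -I{} date -u -d @{} +%Y%m%d
```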

CommandLineFu: FOR LOOP – report file count in pwd/* && print disk usage

for i in */ ; do
        echo -n "$i:" >> "/path/to/some/file/already/created.txt"
        find "$i" -type f | wc -l >> "/path/to/some/file/already/created.txt"
        du -hs "$i"
done
exit 0

Quick and dirty…
Actually this is quite slow when dealing with directories containing thousands of little files.
But it gets the job done.  I’ll play around with it and see if forking helps.

CommandLineFu: Read list of filenames – test if they exist

while read -r file; do if [[ ! -e $file ]]; then echo "$file|error" ; fi ; done < input.txt

I’ll usually pipe the output to tee so I can watch what’s going on:

while read -r file; do if [[ ! -e $file ]]; then echo "$file|error" ; fi ; done < input.txt | tee error_report.txt

CommandLineFu: Split a Text file based on line numbers

Assumptions: I have a text file that contains 25+ Million lines, I want to split them into 100,000 line text files.

x=1
y=100000
z=1
while [ $z -le 26 ]
do
    sed -n "$x,${y}p;${y}q;" tbl_001.txt > "t$z.txt"
    x=$(( x + 100000 ))
    y=$(( y + 100000 ))
    z=$(( z + 1 ))
done

If you want to split by a different amount, change the starting value of “y” and the 100000 being added to “x” and “y” each pass to the number of lines you want.

The “z” variable is used as the filename, and the cutoff point.  If my original file only had one million lines I would change the “while” condition to 10 instead of 26.

I’m sure there’s a way to have the machine do the math for me.  But I don’t have the patience to hunt down how to do this right now.  I imagine it would have something to do with storing the line count (wc -l) in a variable, prompting the end user for the max line count (read $maxcount), and looping until the file is completely done.  (Not sure how to do this last part).  A project for another day.
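As it happens, GNU coreutils will do the chunking and the math in one shot with split. A scaled-down sketch using 100-line chunks instead of 100,000 (the --additional-suffix flag needs a reasonably recent coreutils):

```shell
seq 250 > tbl_001.txt                                   # sample input: 250 lines
# -l sets lines per chunk, -d numbers the outputs: t00.txt, t01.txt, t02.txt
split -l 100 -d --additional-suffix=.txt tbl_001.txt t
wc -l t0*.txt                                           # 100, 100, and 50 lines
```

split figures out the number of output files on its own, so there is no loop condition to maintain at all.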

Lotus Notes NSF Repair [8.5.2]

Error: “Database is corrupt; cannot allocate space” when opening database after server upgrade.


You have recently upgraded your Domino server from a prior Domino release. When users attempt to open their mailfile on the server, however, the following error displays:

“Database is corrupt; cannot allocate space”.

Resolving the problem

This error often indicates database corruption. In this case, you may be able to work around the issue by running Fixup, Compact and Updall as follows:

  1. fixup -f (This causes Fixup to check all documents in the database.)
  2. compact -i -c -d -K (ignore errors, copy-style, delete view indexes, set large UNK table)
  3. updall -R


MS Access: Get Extension from File Name


Assumes a single field labeled [name] contains a filename without the leading path. Should a file not have an extension, or not contain a period, the resulting value will be blank.

CommandLineFu: Basic Filetype Report

A little more find magic:

find . -type f -printf '%d|%k|%f|' -exec file -F"|" -pi '{}' \;


find .

Start searching within the working directory.

-type f

Return only files, not directories

-printf '%d|%k|%f|'

Output the tree depth, then a pipe, the file size in 1K blocks, then a pipe, then the filename without the leading directories, and a pipe.

-exec file -F"|" -pi

Execute the “file” command on each file found, using a pipe delimiter (-F"|") instead of the default ":" colon; additionally, do not change last access times (-p), and output the file type in MIME format (-i).
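To see what that buys you, here is the same command run against a throwaway directory — a sketch assuming GNU find and the file utility are installed:

```shell
mkdir -p demo/sub
echo "hello" > demo/sub/note.txt
# depth|size in 1K blocks|name| followed by file's own output: path|mime-type
find demo -type f -printf '%d|%k|%f|' -exec file -F"|" -pi '{}' \;
```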

for dir in */; do a=${dir%%,*} ; find "$dir" -type f -printf '%d|%k|%f|' -exec file -F'|' -pi '{}' \; > "$a"_LIST.TXT & done ; wait

I use the above FOR loop when I have many folders that I need to generate a report for.

In this case all of my top level folders are named after a person (“Last, First”), so I create a variable cutting everything after the first comma. I use that variable (the last name) to name a text file.  The results of the find command are written to this text file.

The only obvious pitfall is if two folders have the same value before the comma.  Or two different people have the same last name.

CommandLineFu: Create Allegro Export Report

for i in {001..999} ; do a=$(find ./$i/Native/ -type f 2>/dev/null | wc -l) &&
b=$(find ./$i/Text/ -type f 2>/dev/null | wc -l) &&
c=$(( $(wc -l ./$i/Data/loadfile.txt 2>/dev/null | awk '{ print $1 }') - 1 )) &&
echo "$i,$a,$b,$c" ; done

Assumes all Allegro exports are located in the same top level directory, and follow a simple alpha-numeric sequence.

In the future I plan on building a simple script around this that prompts for a top-level directory so it can be run from anywhere on the network.  Additionally, storing the report to a text file and running subsequent calculations to determine any time a line item doesn’t match perfectly between Native, Text and Record count.


CommandLineFu: AWK – comma quote delimited file

awk -F'^"|","|"$' '{ OFS="|" ; print $2,$3,$4 }' somefile.txt

Use awk to parse a comma delimited file.

OFS="|" outputs a pipe-delimited file.

Because the leading quote leaves an empty first field, add one to all column references: the first column is $2, the second is $3, etc.
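A quick demonstration on a one-line sample (with GNU awk, the anchored ^" and "$ alternatives strip the outer quotes into empty first and last fields):

```shell
# With gawk this prints: a|b|c
printf '"a","b","c"\n' | awk -F'^"|","|"$' '{ OFS="|" ; print $2,$3,$4 }'
```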

CommandLineFu: Bash Netcat Copy

Have you ever had to copy millions upon millions of little files across your network very, very quickly? Have you exhausted all of your other command line hacks yet? Of course you have, or you wouldn’t be reading this. (Or you’re my mom.)

Ok… I get that the audience for this type of thing is rather limited. But this is one of those posts that will get more hits from me than anybody else. This is strictly for demonstrating how to send thousands or millions of little files across a network using bash, tar, and netcat. (Mom, you can stop reading now… I don’t make a cameo in the video… you can skip this one. Thanks for the click though.)

The Code: One-liners are a beautiful thing

##Talking Box

tar -cz [source_dir] | nc [destination_ip] [destination_port]

##Listening Box

nc -l -p [local port] | tar -C [destination_dir] -xzf -
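The tar halves of that pipeline can be sanity-checked without touching the network at all — a sketch piping one throwaway directory into another, with -f - made explicit for portability:

```shell
mkdir -p src dest
echo "hello" > src/file.txt
# First tar streams a gzipped archive of src to stdout;
# the second reads the stream from stdin and unpacks it into dest.
tar -C src -czf - . | tar -C dest -xzf -
cat dest/file.txt                     # prints hello
```

Swap the pipe for netcat on each end and you have the two one-liners above.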

The setup:
Running cygwin-X from one of my XP boxes I tunneled into two different linux boxes (krispc7 == ubuntu 11.04 && kris@bt == backtrack 5).

ssh -X kris@krispc7
ssh -X kris@bt

I did this to work entirely within a native linux environment mainly because I’ve only ever done this with cygwin in the past. Also, so I can demonstrate everything on the same screen using terminator (my favorite GUI shell) and not have to run multiple desktop recorders. I don’t actually need the X forwarding, and I’m sure that my performance was lacking because of this. Additionally, the files being copied were on a separate windows file server (we’ll call him e5). So that throws the whole speed thing out the window. Combine that with my extremely verbose switches and you could probably print the files out of one machine, and physically scan them into the other machine, quicker than the actual copy process took place.

Like I said… for demonstration purposes only. Running the compression and netcat instance on a third party machine is just plain stupid in this situation if you’re trying to move stuff really fast (not to mention that this particular hack box has no legs at all). The ideal environment would be to run the talkie box command on the actual talking box.

Moving on…

I ssh into bt (i know everyone roots into their bt boxes… but I don’t allow root to ssh anything), go to the network shared directory on e5 that contains the subfolders with the millions of little files and initiate the talkie side of the command. I then ssh into krispc7 and initiate the listening side of the command.

…Actually it’s the other way around… but you get the idea (“YOU”, is me talking to myself in my own post. Now I’m omnisciently referring to myself in the third person twice removed… and you thought you had problems.)

So listening box is listening, and talking box is waiting for me to hit enter. In the bottom right of the video is a simple while loop I used to count the number of new files in the destination directory.

I let the copy/nc job run for about ten minutes before I killed the video. But I cut a lot out while editing… so 10 minutes happens in less than two. (Who really wants to watch a video of files being copied?).

What’s happening here you ask?
Each file is being compressed on (what is supposed to be) the local machine and instead of being output to an individual zip or tarball file I’m simply redirecting the compressed data into netcat which sends the information over a tcp connection pointed at a specific port. The listening box in turn is monitoring the port defined (9998 in my video) for any and all incoming data and redirects it to be decompressed in the output location of choice.

Maybe tomorrow I’ll run a test that involves copying a bunch of stuff back and forth between two high-end machines (without any man in the middle), and compare the speeds when using different types of compression. Then compare those to a standard scp, windows drag and drop file copy, and my favorite… xxcopy.

Until then, enjoy the show. (Always launch the videos in full screen to watch in HD).

HOW TO: SKYPE for Dummies (and loved ones)

You can do it!

So I went to the doctor the other day and he said that I’m developing a form of ‘Repetitive Stress Injury’ in my neck.  My neck of all places!  I can only attribute this to how many times a day I shake my head back and forth followed by a self-administered forehead slap. Why, you ask?  Because no one in my tiny little sphere of influence knows how to use Skype.

Why am I such a hardcore proponent of Skype?  Because anything your app can do, Skype can do better!

  • Instant messaging (for all you keyboard junkies)
  • Group chat and Group video conferencing (few can compete with the group video conferencing feature)
  • A real phone number (you choose the area code)
  • Free calls to other Skype users (HELLO!)
  • Call forwarding (forward incoming skype calls to your cell phone)
  • Visual Voicemail (first to come up with this one, everyone else has been copying since)
  • File Transfers (not the fastest, but extremely easy to use)
  • Desktop/Screen Sharing (great for one on one How-To sessions)
  • Platform independent (OSX, Windows, Linux, mobile devices)
  • Integrates seamlessly with Facebook (see your news feed from the Skype home window)
  • Send and receive SMS messages (Can’t say I’ve ever used this one, but I did use a service once that converted my voicemails to text messages and sent them to my cell phone — kind of cool, although fraught with inaccuracies)
  • Unbelievably cheap international calling rates (for calling those people in countries where the internet doesn’t exist yet)
  • Outbound calling to anywhere in the US for $30 a year (untouchable)
  • An unbelievable amount of gadgets and specialized devices built just for Skype (my personal favorite –> Bluetooth Retro Handset [shameless plug])
  • A countless number of add-on apps (you name it — call recorders, faxing capabilities, lie detectors, translators, games, etc. etc.)
  • Integrates directly into web browsers (recognizes phone numbers on websites and converts them to links == single click to call)
  • To Go – Fake Forwarding (amazing feature that can be re-purposed for a number of clever uses)

The list goes on, kids.  While some features cost a buck or two, most features the casual user will ever need are completely FREE.  Skype is a textbook example of how to find success using the Freemium business model. Below is the first in what will likely turn into a series of Skype How-To videos.

This one is aimed at all those loved ones out there still using tin cans and string to reach out and touch when you’ve got a brand new, shiny, ultra-expensive mac laptop sitting in front of you. I can do separate videos for Windows and Linux if need be… just ask.


A little gem I came across today in the ##linux channel on freenode.

$ export HISTTIMEFORMAT="%F %R "

Prepends a time stamp to each entry in the history command’s output.
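%F is the full YYYY-MM-DD date and %R the 24-hour HH:MM time — standard strftime fields, so you can preview the stamp with date:

```shell
# history will prefix each entry with exactly this stamp:
date +"%F %R "
```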

RTFM Coffee Mug

I’m a night person.  There’s no doubt about it. And, according to very reliable sources, I have been this way since birth.  I’ve sent memos out, but the rest of the world has yet to modify their schedule to coincide with the hours in which I am most productive.  My clone, while good to have around when an extra liver is needed, would be useless in this regard as he would prefer to sleep the day away while I do all the work.  Furthermore, my attempts at post-natal gene modification of my peer group (tried to inject bat DNA into co-workers) didn’t yield the results I was hoping for, and now the girl in HR just looks at me funny.  Alas, I’m forced to modify my schedule to coincide with the rest of you day-walkers.  (I’m a giver… I know).

Unfortunately for me… I’m up till all hours of the night because my brain decides to finally join the rest of my body around midnight.  This, in turn, results in me losing about 2 to 5 hours of sleep each night.  By Friday morning I often find myself getting kicked off the subway on my way to work because I resemble the living dead.

To date, the best solution I have found to this problem (that doesn’t include using my friends as guinea pigs) has been caffeine.  The only downside to this solution is the logistics and ramp up time it takes to get said chemical into my bloodstream.  Walking around the office dragging an intravenous caffeine drip once again drew the ire of human resources.  To make matters worse, this ramp up time is fraught with all sorts of external stimuli that require the use of my higher brain functions, namely speech.

In the spirit of single-handedly killing flying mammals I found an interactive portable caffeine container that also wards off those early morning requests without having to strain anything other than my basic motor functions.  It’s called a mug.  But not just any mug.  Usually a mug in my hand combined with an eyeball dangling from my skull is enough to repel the early morning onslaught.  But, some still require that I spell things out for them.

In beautiful eggshell white helvetica font is my canned response for all the ‘how-to’ requests that are asked of me prior to moonrise…. RTFM.

Shameless plug disclaimer:

From time to time I will use my webspace to touch on geek-ware or services that I find interesting.  While I’m not getting paid to hawk these wares at the moment, steps are being taken to allow for referral links that would generate a small commission for me on certain products.  In light of this, and to assure everyone I’m not just a greedy capitalist pig, I will only provide these referral links for widgets and gadgets that I actually own and have purchased with my own hard-earned money, or if it is something I really want (Christmas list page coming soon).  I will always distinguish between the stuff I own and endorse by providing a video or still shot showing me with said product.  Feel free to flame at will.


Browsing securely with firefox extensions from kris wasserman on Vimeo.

When you park your car at the mall do you leave all of the windows down, keys in the ignition, and your wallet sitting on the passenger seat?  Of course not.  So why do you do it when you’re browsing the web?  The next time I get a call because you clicked on something you shouldn’t have, or because you decided to run the latest and greatest in fake anti-virus software, I’m just going to format your hard drive and install Linux on your machine with the edubuntu kidsafe packages installed to protect you from yourself.

Help me, help you.  Take 5 minutes to protect yourself from the evil doers out there on the interwebs and install these firefox extensions.

1. Adblock plus —
2. Adblock plus pop-up blocker —
3. Redirect Remover —
4. Browser protect —
5. WOT —
6. No Script —

They’re all free, and for the most part, easy to use.  The only add-on that requires the use of anything north of your brain stem is the ‘No Script’ add-on.  If you do not plan on reading the instructions for this one prior to installing it, please watch the video for a demonstration on how to use it.

Also covered in this video: tips about secure HTTP transmission and setting up temporary fake email addresses.

This video broke the record for my longest tutorial to date, but I felt that this is one of the more important topics that I will ever discuss on this site.  There are no safe neighborhoods on the internet.  All of the bad guys live right next door to you.


One of the best oversights in clever programming ever has got to be Apple’s auto-correct feature.  It’s almost as if the spell check software was replaced by Freud.  So you’ve been texting back and forth with someone and your phone decided it was smarter than you and swapped in the text it thought you meant to say.   Now you need to take a screenshot of your conversation and share it with the world.  To do so, hold down the home button (the only round button on the face of the phone).  While holding down the home button, tap the ‘wake/sleep’ button on the top of the phone.  You’ll see the screen flash to let you know that the screen contents have been captured.  Jump back to your home screen and launch the camera or photos app, and you’ll see your screenshot.  Now upload that thing to facebook and make mom proud!


xxcopy "k:\foo" "k:\bar" /e /tc /h /bb /k /pb /yy /oN..\barfoo.log

/e                ==  copy all files and folders recursively
/tc               ==  do not modify date and time stamps (create, mod, and access)
/h                ==  include hidden files
/bb               ==  skip file if it already exists
/k                ==  preserve source permissions
/pb               ==  display progress bar
/yy               ==  automatically reply "YES" to all prompts
/oN..\barfoo.log  ==  create a logfile named barfoo.log in the directory above the working directory

( go to to download the latest version of xxcopy )

When running my own copy jobs I typically disregard the ‘Progress Bar’ switch because it has to sit there and calculate the number of files about to be copied (useless to us geeks). Eliminating the /PB switch allows the copy process to begin immediately.

If you want to get really clever, create a batch file containing a single xxcopy command for each folder that you want to copy, then create a master script to launch each one individually.

Copy Script 1:
xxcopy k:\copytest1 k:\outputtest1 /e /tc /h /bb /k /yy /oNcopy01.log

Copy Script 2:
xxcopy k:\copytest2 k:\outputtest2 /e /tc /h /bb /k /yy /oNcopy02.log

Copy Script 3:
xxcopy k:\copytest3 k:\outputtest3 /e /tc /h /bb /k /yy /oNcopy03.log

Launch Copy Scripts Script:
@echo off
:: create destination folder structure
xxcopy k:\copytest k:\outputtest /t
:: call on individual copy scripts
start call k:\bat0.bat
start call k:\bat1.bat
start call k:\bat2.bat
start call k:\bat3.bat
start call k:\bat4.bat
start call k:\bat5.bat
start call k:\bat6.bat
start call k:\bat7.bat
start call k:\bat8.bat
:: kill unnecessary shells
taskkill /im cmd.exe /f
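For anyone doing the same thing from a cygwin prompt instead of a batch file, the fan-out pattern boils down to backgrounding each copy with & and then wait-ing on all of them.  A minimal sketch, with plain cp standing in for xxcopy and throwaway folder names of my own invention:

```shell
# Same fan-out idea in bash/cygwin terms: background each copy, then
# wait for all of them to finish -- no taskkill cleanup required.
# (cp stands in for xxcopy; all paths here are just for illustration)
mkdir -p /tmp/copytest1 /tmp/copytest2 /tmp/outputtest1 /tmp/outputtest2
touch /tmp/copytest1/a.txt /tmp/copytest2/b.txt
cp -r /tmp/copytest1/. /tmp/outputtest1/ &    # copy job 1
cp -r /tmp/copytest2/. /tmp/outputtest2/ &    # copy job 2
wait                                          # block until both jobs exit
echo "all copies done"
```

The wait builtin is the polite version of the taskkill line above: instead of shooting every shell in the head, it simply blocks until the backgrounded jobs finish on their own.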

( Change video quality to HD/720p to watch in full screen )


find . -type f -iname "*.TXT" -exec sed -i 's/<< ...-.-........ >>//g' '{}' \;
*Assuming the original Bates number uses the following format: ABC-X-01234567
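Before unleashing that across thousands of TXT files, it’s worth sanity checking the pattern on a single line first.  A throwaway example with a made-up Bates stamp:

```shell
# Each '.' matches exactly one character, so '...-.-........' lines up
# with the ABC-X-01234567 shape: 3 chars, dash, 1 char, dash, 8 chars.
echo "Some text << ABC-X-01234567 >> more text" | sed 's/<< ...-.-........ >>//g'
```

If the stamp disappears from the echoed line, the pattern is safe to point at the real files.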

Depending on how cygwin is configured, be sure to convert the files back to ‘DOS’ format after using sed:
find . -type f -iname "*.TXT" -exec unix2dos '{}' \;
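If unix2dos isn’t part of your cygwin install, GNU sed can do the conversion itself.  A small sketch (the sample file name is just for illustration):

```shell
# Append a carriage return to every line, i.e. convert LF -> CRLF,
# which is what unix2dos does under the hood (GNU sed syntax)
printf 'line one\nline two\n' > sample.txt
sed -i 's/$/\r/' sample.txt
wc -c sample.txt    # two bytes bigger now: one \r added per line
```

Note the \r escape is a GNU sed feature; other sed flavors may treat it as a literal ‘r’.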

The find command is good if your files are not organized in a cleanly numbered subfolder structure, or are mixed in with other file types.  Usually simple globbing will do the trick a bit faster:

sed -i 's/foo/bar/g' IMAGES/00/0[0-9]/*.TXT ; unix2dos IMAGES/00/0[0-9]/*.TXT
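One more sanity check before running that: let echo expand the glob first, so you can see exactly which folders 0[0-9] will hit.  The folder layout below is fabricated to mirror the IMAGES example:

```shell
# 0[0-9] matches folders 00 through 09 only -- note 12 gets left out
mkdir -p demo/00/00 demo/00/05 demo/00/12
( cd demo && echo 00/0[0-9] )    # prints: 00/00 00/05
```

Swap echo for sed only once the expansion matches the folders you actually intend to touch.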

sed ==