OnePlus One unboxing

Charger and phone come in separate boxes.

Charger does not come with a cable. The phone does.

I guess it must be a statement for modularity, and perhaps you get to buy only the accessories you need: if you need just a cable, you just buy the cable.

It’s a pretty power adapter, but not very functional: the contacts are not retractable like on many others.

The box was of really good quality material.

Custom shipping box for the phone, with a nice tab at the end of the rope used to open the box.

A white box containing another box; good material again, but rather wasteful in my humble opinion.

Ta-da!

Hopefully people will read the fine print and not get rid of this cover, as it contains the device’s IMEI and serial number.

And this is the first Android device I have that I can’t open to remove the battery, so I’ll still be using my external battery for charging. Hopefully the battery life will be as good as I expect, since the phone didn’t come with a bunch of pre-loaded crap apps you can’t uninstall (dear AT&T and Samsung).

This material feels awesome, and the phone is pretty light.

And it has a really cool-looking USB cable.

I’m happy. Now let’s hope I can find a rugged cover to protect it before it breaks; I tend to go running with my phones, as I need GPS tracking and like taking a picture here and there while I exercise.

First impressions: Lenovo Yoga Pro and Windows 8.1 on a 2-in-1 device

After almost two weeks of heavy-duty use of this machine, I must say Windows 8.1 is not fully baked when it comes to its tiled/touch/app-store experience; however, it’s not the nightmare I expected.

The FrostWire 5.7.0 currently circulating on the Internet was built on this machine, and the experience was quite pleasant as a workstation. Then I carried it around for the North American BitTorrent conference, and during that time it was a very convenient tablet for all the social networking during the event.

WordPress: Cannot create directory error, got everything chmoded to 777?

So you went as far as chmod’ing your entire folder structure to 777 (very unsafe), and you’ve hacked wp/wp-admin/includes/file.php

return new WP_Error( 'mkdir_failed_ziparchive', __( get_current_user() . ' Could not create directory. ('.$_dir.')' )...

to print out exactly which directory it cannot create and which user is trying to create it. Everything is correct, yet it won’t create the fucking folder?

The issue might be your vsftpd configuration!

Open /etc/vsftpd.conf and make sure that you have this setting uncommented:

write_enable=YES

Restart vsftpd (sudo service vsftpd restart) and try upgrading again. You’ll want to hit that ‘Donate Bitcoin’ button if I saved your ass today.
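If you’d rather script the check than edit by hand, here’s a minimal sketch. It works on a scratch copy in /tmp with made-up contents so nothing real is touched; on your server the live file is /etc/vsftpd.conf.

```shell
# Demonstrate the fix on a scratch copy of the config (contents are hypothetical).
conf="/tmp/vsftpd.conf"
printf '#write_enable=YES\nlisten=YES\n' > "$conf"

# Uncomment write_enable so FTP writes (and thus WordPress upgrades) can happen:
sed 's|^#write_enable=YES|write_enable=YES|' "$conf" > "$conf.fixed"
grep write_enable "$conf.fixed"
```

On the real server you’d make the same edit directly in /etc/vsftpd.conf and then restart the daemon.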

Cheers

building cgminer from source on OSX

So you cloned the cgminer repo from GitHub to build on your OSX machine and you get this bullshit error:

$ ./autogen.sh
readlink: illegal option -- f
usage: readlink [-n] [file ...]
usage: dirname path
touch: /ltmain.sh: Permission denied
Use of chdir('') or chdir(undef) as chdir() is deprecated at /usr/local/bin/autoreconf line 670.
Configuring...
./autogen.sh: line 10: /configure: No such file or directory

readlink works differently on OSX, and the current version of the autogen.sh script doesn’t seem to have been tested on OSX (I wonder why they didn’t use a simple bs_dir=`pwd`; the answer is probably canonical paths and whatnot).

To keep moving along, open the autogen.sh script and just change the value of the bs_dir variable to the full real path of where you cloned the cgminer source code.
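If you’d rather script that one-line edit, something like this works. It’s a sketch run against a scratch copy of autogen.sh in /tmp (both the path and the file contents here are hypothetical; substitute your real clone path):

```shell
# Simulate the edit on a scratch copy of autogen.sh.
repo_dir="/tmp/cgminer-demo"
mkdir -p "$repo_dir"
printf 'bs_dir="$(dirname $(readlink -f $0))"\n' > "$repo_dir/autogen.sh"

# Replace the readlink-based assignment (which breaks on OSX) with the
# literal absolute path of the clone:
sed "s|^bs_dir=.*|bs_dir=\"$repo_dir\"|" "$repo_dir/autogen.sh" > "$repo_dir/autogen.sh.fixed"
cat "$repo_dir/autogen.sh.fixed"
```

On your real clone you’d apply the same substitution to the actual autogen.sh and keep going.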

Then execute your autogen script, making sure to enable the compilation flags for your ASIC hardware. In my case I remembered seeing ‘icarus’ on a binary build of cgminer I tried before, so I did

./autogen.sh --enable-icarus

You might want to enable all of them if you’re not sure what hardware you have, or will have in the future (check out the README for all the --enable-xxx options available).

If you’re getting errors from your configure script due to missing dependencies, I strongly recommend you use Homebrew to install these packages (if you are using MacPorts or Fink, I strongly suggest you completely remove that crap from your computer and go 100% with brew; it works really well if you’re building a lot of code on a regular basis):

brew install autoconf automake libtool openssl curl ncurses

brew, at the time of this writing, didn’t have libcurl, so that one you will have to download, ./configure, make, and sudo make install yourself from http://curl.haxx.se/download.html (I used version 7.34 when I did it).

After that the autogen script should work, and then you should be just one ‘make’ away from your goal.

AWS troubleshooting: how to fix a broken EBS volume (bad superblock on xfs)

As great as EBS volumes are on Amazon Web Services, they can break and never mount again. Even though your data could still be there intact, a simple corruption of the filesystem structure can cause a lot of damage. In this post I teach you how to move all that data onto a new EBS drive, so keep calm and read slowly.

So, you try to mount your drive after some updates and you get an error like this in dmesg | tail:

[56439860.329754] XFS (xvdf): Corruption detected. Unmount and run xfs_repair

So you unmount your drive, invoke xfs_repair, and you get this…

$ sudo xfs_repair -n /dev/xvdf
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
..........................................

and no good secondary superblock is found.

Don’t panic, this is what you have to do next to solve this issue:

  1. Go to your AWS dashboard, EC2 section.
  2. Click on “Volumes”
  3. Find the broken volume.
  4. Create a snapshot of the broken volume (this takes a while).
  5. Create a new volume of the same size as (or larger than) your old drive from the snapshot you just created (this takes a while).
  6. Attach your new volume to the same EC2 instance (no need to reboot or anything). If the old drive was mapped to /dev/xvdf, the new one will be mapped to /dev/xvdg (note how the last letter increases alphabetically).

Now here’s a gotcha: Amazon will not create your new drive using the same filesystem type (xfs); for some reason it will create it using the ext2 filesystem.

$ sudo file -s /dev/xvdg
/dev/xvdg: Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=2e35874f-1d21-4d2d-b42b-ae27966e0aab (large files)

Here you have two options:
1. Live with the new ext2 filesystem, and make sure your /etc/fstab is updated to look something like this:
/dev/xvdg /path/to/mount/to auto defaults,nobootwait,noatime 0 0

or 2. Copy the contents of your drive to a temporary location, usually inside /mnt, which has plenty of space from the ephemeral drive EC2 instances come with; then mkfs.xfs the new volume and copy the contents back… (which is what I did, since I chose to create a larger drive and the ext2 format that came on the new volume only recognized the size of the snapshot)
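The copy-back route from option 2 would look roughly like this. It’s only a sketch: the device name (/dev/xvdg) is from this post’s example, and the mount points are made up, so double-check everything against your own instance before running anything destructive.

```shell
# 1. Mount the new (ext2) volume and copy everything to scratch space on /mnt:
sudo mkdir -p /mnt/newdrive /mnt/stash
sudo mount /dev/xvdg /mnt/newdrive
sudo rsync -a /mnt/newdrive/ /mnt/stash/

# 2. Reformat the new volume as xfs (this DESTROYS its current contents,
#    which is fine because we just copied them out):
sudo umount /mnt/newdrive
sudo mkfs.xfs -f /dev/xvdg

# 3. Mount it again and copy the data back onto the fresh xfs filesystem:
sudo mount /dev/xvdg /mnt/newdrive
sudo rsync -a /mnt/stash/ /mnt/newdrive/
```

Remember you still have the snapshot from step 4, so even if a copy goes wrong you can start over.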

Hope this saved your ass, leave a note if I did.

Remember to never do any irreversible action until you have a disk snapshot, try your best to never lose data.

How much electricity does the Facebook app consume every day by making phones vibrate with push notifications?

My silly little goal is to convince Facebook to pre-configure its mobile apps so they don’t make phones vibrate by default and push silent notifications to mobile users.

Help me make an estimate: we need to know, on average, how many milliwatt-hours the average phone spends per notification, and an estimate of how many push notifications are received by smartphones worldwide from Facebook.

Say a phone consumes 150 milliwatt-hours per push notification, and in one minute alone Facebook sends 1 million notifications; Facebook would then be drawing about 9 megawatt-hours from the world’s phones every hour. This is just a number thrown out there, since I don’t know for sure if that’s the average energy consumed, nor do I know how many notifications are sent per minute; I bet it’s way more than 1 million per minute.
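For the units to work out, the energy cost per notification has to be read as milliwatt-hours; under that assumption (and remember both inputs are guesses, not data), the arithmetic checks out:

```python
# Back-of-the-envelope check of the estimate above.
# Both inputs are assumptions from the text, not measured data.
energy_per_notification_wh = 0.150    # 150 milliwatt-hours, assumed
notifications_per_minute = 1_000_000  # assumed rate
minutes_per_hour = 60

wh_per_hour = energy_per_notification_wh * notifications_per_minute * minutes_per_hour
mwh_per_hour = wh_per_hour / 1_000_000  # watt-hours -> megawatt-hours

print(f"{mwh_per_hour:.1f} MWh per hour")  # → 9.0 MWh per hour
```

Scale either input up or down and the result scales linearly, which is why better numbers from Facebook or a hardware engineer would pin this down quickly.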

I would love help from people who work at Facebook or who have stats on push notifications, as well as from any engineer working on phone hardware at a major phone manufacturer.

Twitter is also sending a lot of notifications to phones.

Turn off vibration in your Facebook app; you will save a considerable amount of battery every day.

can’t ssh to ec2 ubuntu instance, /etc/fstab breaks bootup due to missing ebs volume [SOLVED]


So the /etc/fstab file on your root volume looked like this

LABEL=cloudimg-rootfs / ext4 defaults 0 0
/dev/xvdf /mnt/backups auto defaults,comment=cloudconfig 0 2

By mistake you deleted the EBS volume that you had mounted on /mnt/backups (or whatever folder), and you restarted your Ubuntu instance, not knowing that if /etc/fstab breaks, boot will not continue on to start application-layer networking services like ssh on port 22…

You can ping the machine, but you can’t ssh; Amazon support won’t respond, or will tell you to fuck yourself.

You learn that Ubuntu has had this bug for a while, but that it’s been addressed by passing your volume configuration a nobootwait option.

You wish your /etc/fstab looked like this, but you can’t get in, and Amazon doesn’t give you any other option from their console to go in and solve the problem through a terminal…

LABEL=cloudimg-rootfs / ext4 defaults 0 0
/dev/xvdf /mnt/backups auto defaults,nobootwait,comment=cloudconfig 0 2

No worries, I have a fix that will let you edit that file, boot back up, and try to recover things. You may have lost that EBS volume, but you won’t have to set up this computer again.

1. Make a snapshot of the root volume of that instance. This will take a while.
2. Make a new EBS volume from that snapshot and put it in the zone where the EC2 instance lives.
3. Create an identical temporary new EC2 instance in the same zone.
4. Attach the volume you created in step 2 to the new instance.
5. ssh into the new machine.
6. Run sudo fdisk -l; you should see all the attached devices, with something like this referring to the attached EBS volume:

Disk /dev/xvdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdf doesn't contain a valid partition table

Don’t listen to that last message; you do have a valid partition.

7. Create a folder to mount the disk on: sudo mkdir /mnt/old-volume
8. Mount it: sudo mount -t auto /dev/xvdf /mnt/old-volume
9. Edit /mnt/old-volume/etc/fstab and fix it.
10. Unmount /mnt/old-volume, turn off the temporary instance, and detach the repaired volume.
11. Turn off the original instance and detach the broken root volume (at /dev/sda1).
12. Attach the repaired volume to the original instance as /dev/sda1.
13. Start the original instance.
14. ssh into it (it will have a new IP address, so make sure to update your DNS or load-balancer entries).
15. Terminate the temporary instance and all the volumes that you won’t need.
16. Get to work.
17. Leave a tip below. 😉
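The critical edit is step 9. Here’s what it looks like, sketched against a scratch copy of the fstab from this post (the /tmp path is made up for the demo; on the real mounted volume the file is /mnt/old-volume/etc/fstab):

```shell
# Recreate the broken fstab in a scratch location for the demo:
fstab="/tmp/old-volume-etc/fstab"
mkdir -p "$(dirname "$fstab")"
cat > "$fstab" <<'EOF'
LABEL=cloudimg-rootfs / ext4 defaults 0 0
/dev/xvdf /mnt/backups auto defaults,comment=cloudconfig 0 2
EOF

# Add nobootwait to the EBS line so a missing volume can never hang boot again:
sed 's|\(/dev/xvdf .* defaults,\)|\1nobootwait,|' "$fstab" > "$fstab.fixed"
cat "$fstab.fixed"
```

On the real volume you’d make this edit in place (and then carry on with steps 10 through 14).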

19 Reasons to switch to eBooks/eReaders

So I’m tired of evangelizing eBooks/eReaders in person, and I guess I’ll do a lot more good by writing this so that you can share it next time you want to convince a friend to live in the year 2013 and stop the mad romanticism about handicapped physical books; it’s just ludicrous to read a book on paper unless it has no digital form.

Here is an ever-growing list of why I prefer eBooks to physical books
(got more reasons? leave a comment below)

  1. They are cheaper.

  2. They are available immediately, no need to order, wait, or move physically to get them.

  3. They never get lost.

  4. They take up no space or weight, which brings many added benefits to the world:

4.1. You can have a library of thousands of books with you wherever you are, on different devices (since they can be stored in the cloud).

4.2. You will free a lot of shelf space at home/the office, which also means less dust at home.

4.3. If all students were forced to use eBooks, they wouldn’t have to carry such heavy backpacks, which can deform their spines.

4.4. If all students used eBooks exclusively, there would be CO2 emission reductions, since that’s a lot of weight that doesn’t have to be transported by cars/buses/trains.

  5. You can read them on different devices: e-readers, computers, smartphones.

  6. You can copy and paste.

  7. If you lend them, you never have to beg your friend to give the book back; it comes back to you automatically.

  8. You can search inside them.

  9. If you don’t know the meaning of a word, a dictionary is always there for you; just touch/click the word in question.

  10. The same book may come in different languages.

  11. You can change font types, font sizes, the color of the screen, margins, line spacing.

  12. You can have lots of bookmarks, and you can navigate your bookmarks.

  13. You can read them with the lights turned off.

  14. You can read them with one hand; turning pages is effortless. Awesome when you go out for a walk and have only one hand available.

  15. No more wrinkled, stained, or broken pages.

  16. You can share your highlights on social networks.

  17. You can open a web browser right from the book if there’s a web reference.

  18. There’s no such thing as “out of print”.

  19. It learns your reading speed and tells you how much time you have left in the current chapter or the whole book.

Need more reasons?
Or will you keep reading books because you like the smell of paper, even though there are really a lot of stinky books out there?

Are you still using cassette tapes, or vinyl records? Hell, are you still using CDs?

Make the switch, you will enjoy reading like never before.

Google Glass to enrich Google Maps/Earth with fine grained 3D Models of reality

I was imagining a world, maybe 5 years from now, where there’s an insane number of Google-Glass-like devices out there.

If Google was bold enough to map the streets with cars equipped with GPS and cameras, I think one of the top uses they must have somewhere on their master plan is to use every Glass user’s video feed to map reality to a more fine grained level. It’s easy to see how this data could be used to create 3D detailed models of every city in the world, not just out on the street, but inside buildings to provide us with navigation everywhere, outside or inside.

This will definitely be another level of creepy; I foresee a battle to preserve privacy inside homes and offices once this starts happening. How do you prevent someone else from mapping the inside of your home and uploading it to the web?