Allow users of a certain group to run a command without a sudo password

From time to time I find myself typing sudo to execute commands that require elevated rights, and typing the sudo password every time gets wearisome, hence this blog post. Its purpose is to remind me how to do this the next time I am faced with such a conundrum.

Suppose I wanted to add a group of users who are allowed to run mount and umount without passwords. First, I add a group called “staff”:

sudo groupadd staff

Next we need to edit /etc/group and add the users. A line like

staff:x:407:

will be present; append the users you want to add, separated by commas:

staff:x:407:user1,user2,...
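
If you would rather not edit /etc/group by hand, usermod can append a user to the group (an equivalent alternative; user1 is a placeholder username):

sudo usermod -aG staff user1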

Now we need to configure sudo to allow members of the “staff” group to actually invoke the mount and umount commands.

You just need to add the following line to /etc/sudoers, which you should edit by executing sudo visudo:

%staff ALL=NOPASSWD: /sbin/mount, /sbin/umount
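
To confirm the rule is active, a member of the group can list their sudo privileges; the NOPASSWD entry for mount and umount should appear in the output:

sudo -l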

Now sudo mount won’t ask for a password, but since typing sudo all the time is still a pain, we can avoid it by doing the following:

I can create the following wrapper script called “/usr/bin/mount” (and a similar script for umount):

#!/bin/sh
# Pass all arguments through to the real mount via sudo
sudo /sbin/mount "$@"

To make this slightly more secure, we might want to change the group ownership of these scripts to “staff”:

chgrp staff /usr/bin/mount /usr/bin/umount

and then make them executable for the group “staff” only (g+x alone would leave them executable by everyone else, so we also drop execute permission for others):

chmod g+x,o-x /usr/bin/mount /usr/bin/umount
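
With that in place, any member of “staff” can mount without typing sudo or a password (illustrative device and mount point; adjust to your system):

mount /dev/sdb1 /mnt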

Note: depending on the OS you are using, check where the mount and umount commands are located. They might be in /bin instead of /sbin, in which case adjust the paths above accordingly.


Done

Setting up an MQTT server on Debian Jessie

Reblogged from the Jolabs Tech Blog.

Intro

Today we’re going to be setting up our own home automation server on a dedicated Linux server. It’s going to host our platform using the Node.js environment. The different devices around the house are going to be speaking MQTT to this server. I’ve chosen MQTT for its robustness and light weight, which allow for easy deployment on my favourite controllers: the ESP8266. The ESP-01, still the cheapest to this date, only sports 4 Mb of flash, yet has no trouble at all controlling a couple of relays for now. In the future the newer/bigger boards can be used to get more I/O and functionality, but for now I’m going to be using ESP-01s extensively and flexibly around the house (they’re wireless…).

Requirements

The idea is to have the server online 24/7, so consider a computer that’s not too power-hungry, like a Raspberry Pi or a small embedded box computer. Alternatively you can boot up…


Move files by date into different directories using the CLI

A while back I was unfortunate: the HDD that I primarily use as a photo archive started having mechanical issues, and before it died in my arms I had the opportunity to back up most of my pictures. However, most of them were truncated and had to be discarded, and the filenames of the ones I could recover were overwritten with random names and ASCII codes.

For quite a while I neglected my pictures and they just piled up in a single, nameless directory, until I took it upon myself to clean them up.

One of the benefits of using a Linux OS is its scripting features:

#!/bin/bash
for i in *; do
  # Extract the "Date/Time Original" date (YYYY:MM:DD) from the EXIF data
  d=$(exiftool "$i" | grep 'Date/Time Original' | awk '{print $4}')
  [ -z "$d" ] && continue   # skip files without an EXIF date
  newd=${d//:/-}            # 2016:05:01 -> 2016-05-01
  echo "$i" "$newd"
  mkdir -p "$newd"
  mv -- "$i" "$newd"
done
  1. save this script to a file named mvByDate.sh in the directory containing the photos
  2. make the file executable with chmod +x mvByDate.sh
  3. execute it with ./mvByDate.sh
  4. voilà: each photo lands in a directory named after the date it was taken
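
Before running it for real, you might want to preview the moves with a dry-run variant (a sketch; it only prints what would happen):

#!/bin/bash
# Dry run: print the planned moves without touching any files.
for i in *; do
  d=$(exiftool "$i" | grep 'Date/Time Original' | awk '{print $4}')
  [ -z "$d" ] && continue
  echo "would move '$i' -> '${d//:/-}/'"
done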

Replacing A Failed Hard Drive In A Software RAID1 Array

Credit goes to the original author, falko.

This guide shows how to remove a failed hard drive from a Linux RAID1 array (software RAID), and how to add a new hard disk to the RAID1 array without losing data.

NOTE: There is a new version of this tutorial available that uses gdisk instead of sfdisk to support GPT partitions.

1 Preliminary Note

In this example I have two hard drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2.

/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0.

/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.

/dev/sda1 + /dev/sdb1 = /dev/md0

/dev/sda2 + /dev/sdb2 = /dev/md1

/dev/sdb has failed, and we want to replace it.

2 How Do I Tell If A Hard Disk Has Failed?

If a disk has failed, you will probably find a lot of error messages in the log files, e.g. /var/log/messages or /var/log/syslog.

You can also run

cat /proc/mdstat

and instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.
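
You can also query a specific array directly; mdadm will print its state and list any failed devices:

mdadm --detail /dev/md0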

3 Removing The Failed Disk

To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).

First we mark /dev/sdb1 as failed:

mdadm --manage /dev/md0 --fail /dev/sdb1

The output of

cat /proc/mdstat

should look like this:

server1:~# cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]

unused devices: <none>

Then we remove /dev/sdb1 from /dev/md0:

mdadm --manage /dev/md0 --remove /dev/sdb1

The output should be like this:

server1:~# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1

And

cat /proc/mdstat
should show this:

server1:~# cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]

unused devices: <none>

Now we do the same steps again for /dev/sdb2 (which is part of /dev/md1):

mdadm --manage /dev/md1 --fail /dev/sdb2

then,

cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[2](F)
24418688 blocks [2/1] [U_]

unused devices: <none>

mdadm --manage /dev/md1 --remove /dev/sdb2

server1:~# mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2

Now,
cat /proc/mdstat

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
24418688 blocks [2/1] [U_]

unused devices: <none>

Then power down the system:

shutdown -h now

and replace the old /dev/sdb hard drive with a new one (it must have at least the same size as the old one – if it’s only a few MB smaller than the old one then rebuilding the arrays will fail).

4 Adding The New Hard Disk

After you have changed the hard disk /dev/sdb, boot the system.

The first thing we must do now is create the exact same partitioning as on /dev/sda. We can do this with one simple command, which dumps the partition table of /dev/sda and writes it to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

You can run

fdisk -l

to check if both hard drives have the same partitioning now.

Next we add /dev/sdb1 to /dev/md0 and /dev/sdb2 to /dev/md1:

mdadm --manage /dev/md0 --add /dev/sdb1

server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1

mdadm --manage /dev/md1 --add /dev/sdb2

server1:~# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2

Now both arrays (/dev/md0 and /dev/md1) will be synchronized. Run

cat /proc/mdstat

to see when it’s finished.
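
If you would rather watch the progress update live instead of re-running the command, watch works nicely (a small convenience, not part of the original guide):

watch cat /proc/mdstat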

During the synchronization the output will look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/1] [U_]
[=>...................]  recovery =  9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/1] [U_]
[=>...................]  recovery =  6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

unused devices: <none>

When the synchronization is finished, the output will look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]

unused devices: <none>

That’s it, you have successfully replaced /dev/sdb!
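
One caveat worth checking (an extra step, not covered by the original guide): if the system boots from these disks, the new /dev/sdb carries no bootloader yet. On GRUB-based systems, reinstalling GRUB on it means the machine can still boot should /dev/sda fail next:

grub-install /dev/sdb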

Cool Bash Tricks

Create ~/.inputrc and fill it with this:

"\e[A": history-search-backward
"\e[B": history-search-forward

This allows you to search through your history using the up and down arrows, e.g. type “cd” and press the up arrow and you’ll scroll through everything in your history that starts with “cd”.

It’s a little bit like ctrl-r, but anchored to the start of the line, and the arrow keys allow you to scroll back and forth between matches.

I use it when I’m looking to (for instance) call up the last ping I did (hit p, up arrow, return), whereas I use ctrl-r more like search, when I’m trying to find a command based on an argument or option that I used.

Both useful.

Other options that I find useful to add:

set show-all-if-ambiguous on

This alters the default behavior of the completion functions. If set to ‘on’, words which have more than one possible completion cause the matches to be listed immediately instead of ringing the bell. The default value is ‘off’.

set completion-ignore-case on

If set to ‘on’, Readline performs filename matching and completion in a case-insensitive fashion. The default value is ‘off’.
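
To apply changes to ~/.inputrc in an already-running bash without opening a new shell, you can re-read the file (a hedged tip):

bind -f ~/.inputrc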

(As miah pointed out, this is all actually Readline functionality, so the title should really be “Readline is the single most useful thing in everything”. 😉)

Include EPS in LaTeX

Recently I wanted to include a directed graph in a LaTeX document. Unfortunately I couldn’t import an SVG file into my document, which meant I had to find an alternative method that avoided converting the file to a raster image format (to prevent rasterization). Inkscape can do the conversion:

$ sudo apt-get install inkscape

$ inkscape -z image.svg --export-eps=image.eps

This produces the EPS (Encapsulated PostScript) file which I wanted to include in LaTeX. Running Gummi with pdflatex for typesetting on a file containing

\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{image.eps}
\caption{image}
\label{fig:image}
\end{figure}

however returns the error

! LaTeX Error: Unknown graphics extension: .eps.

This can be resolved by adding

\usepackage{epstopdf}

which converts EPS images to PDF on the fly for use with pdflatex. This works perfectly.
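
For reference, here is a minimal document that exercises this (a sketch; it assumes image.eps sits next to the .tex file):

\documentclass{article}
\usepackage{graphicx} % provides \includegraphics
\usepackage{epstopdf} % converts .eps to .pdf on the fly under pdflatex
\begin{document}
\includegraphics[scale=.7]{image.eps}
\end{document}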

How to mount Google Drive on your Linux (Ubuntu) system

I have always been a fan of cloud storage, as anything can happen to my mechanical hard drive. But now that I have completely migrated to Ubuntu, only to realise that there is no official Google Drive support whatsoever, I went on a quest to find a working and reliable alternative. Then I stumbled upon google-drive-ocamlfuse, a FUSE filesystem backend for Google Drive which you can use to mount your Google Drive under Linux.

Link: https://github.com/astrada/google-drive-ocamlfuse

Installation

sudo add-apt-repository ppa:alessandro-strada/ppa
sudo apt-get update
sudo apt-get install google-drive-ocamlfuse

Usage

run:

google-drive-ocamlfuse

Make sure you are connected to the internet, as this will open your browser and request access to your Google Drive. Click “Allow” to give the application access to your files.

Mounting

Now we mount Google Drive on your system.

run:

mkdir -p ~/Google_Drive
google-drive-ocamlfuse ~/Google_Drive
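
To unmount it again later, the standard FUSE unmount applies:

fusermount -u ~/Google_Drive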

That’s about it, unless you want to be heroic and modify the configuration file, which lives here:

~/.gdfuse/default/config

Automount

To mount Google Drive upon startup, run:

crontab -e

and insert

@reboot google-drive-ocamlfuse ~/Google_Drive
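
If the mount fails at boot because the network is not up yet, a short delay usually helps (an assumption about your boot timing; adjust the sleep as needed):

@reboot sleep 30 && google-drive-ocamlfuse ~/Google_Drive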

Or go here: https://github.com/astrada/google-drive-ocamlfuse/wiki/Automounting

Done!!!