jeudi 18 janvier 2024

Linux/Debian : removing unnecessary network bridges

Lately, I've been fooling around a lot with containers (Docker) and virtual machines (Vagrant, VirtualBox, ...).
Lots of fun, but an unexpected consequence is that this created a lot of network bridges on my machine.

Thus, this output:

$ nmcli device status
DEVICE             TYPE      STATE         CONNECTION      
wlo1               wifi      connected     eduroam         
br-16841f4ab5b3    bridge    connected     br-16841f4ab5b3 
br-6d277fff70af    bridge    connected     br-6d277fff70af 
br-70abc1ec9d13    bridge    connected     br-70abc1ec9d13 
br-7866640acd2f    bridge    connected     br-7866640acd2f 
br-804db4145aa9    bridge    connected     br-804db4145aa9 
br-989ab8bd5062    bridge    connected     br-989ab8bd5062 
br-c393b3577508    bridge    connected     br-c393b3577508 
br-c4e437f7076a    bridge    connected     br-c4e437f7076a 
docker0            bridge    connected     docker0         
virbr0             bridge    connected     virbr0  

Those "br-" network interfaces are actually "bridges", used to connect an interface to another (see here for details).

Not really harmful, but things are getting a bit crowded here, so I wanted to get rid of them all at once. Here we go:

 * To get only the names of the bridges:

$ nmcli device status | grep br- | awk '{print $1}'
br-16841f4ab5b3
br-6d277fff70af
br-70abc1ec9d13
br-7866640acd2f
br-804db4145aa9
br-989ab8bd5062
br-c393b3577508
br-c4e437f7076a
To remove them, the command is: $ nmcli device delete <device>
So, as a one-liner:
$ for a in $(nmcli device status | grep br- | awk '{print $1}'); do nmcli device delete "$a"; done
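A possible variant (a sketch, not what the post above uses): nmcli's terse mode (-t) prints colon-separated fields, so you can filter on the device type instead of the name prefix. docker0 and virbr0 are excluded explicitly here, to mimic the original one-liner, which only touches the br-* interfaces:

```shell
# List devices in terse (colon-separated) form, keep only the bridges
# except docker0 and virbr0, and delete each one.
nmcli -t -f DEVICE,TYPE device status \
  | awk -F: '$2 == "bridge" && $1 != "docker0" && $1 != "virbr0" { print $1 }' \
  | xargs -r -n1 nmcli device delete
```

The -r flag on xargs avoids running nmcli at all when no bridge matches.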

Et voila ! 

 


dimanche 16 janvier 2022

Importing git repos from exported list

Another post on that topic (see previous post), useful when you want to clone your repos onto a brand-new disk, but only have access to the GitHub/GitLab account (yes, disk failures happen, believe me...).

Step 1:

For Github, you can export the list of your repos with the API:

curl https://api.github.com/users/USERNAME/repos
This returns a full JSON file, but you only need the repo URLs. To get the SSH form, you can do this to get a raw file holding the list of URLs:
curl https://api.github.com/users/USERNAME/repos | grep ssh_url > github_repos
In that file, you will have one line per repo, like this:
    "ssh_url": "git@github.com:USERNAME/adepopro.git",

Step 2

Now, assuming you have stored your SSH key on your GitHub account, you can clone all the repos at once with the following script. It does some text processing (removing quotes, spaces, ...), then a git clone.
if [ "$1" = "" ]
then
    echo "filename missing, exit"
    exit 1
fi

while IFS=":"; read f1 f2 f3
do
# remove quotes
    f2b=$(sed s/\"//g <<< $f2)
    f3b=$(sed s/\"//g <<< $f3)
# remove comma at the end    
    f3c=${f3b/,/}
# remove space at the end    
    f2c=${f2b/ /}
# build path
    p="$f2c:$f3c" 
# clone
    echo "cloning $p"
    git clone "$p"
done < $1
Final note: I haven't tried this with GitLab, but I assume it'll be more or less the same.

lundi 4 janvier 2021

Recovering set of git repos with new computer

I am a heavy Git user, I use it for... almost everything! I have a lot of repos on my machine, connected to various online accounts (GitHub, GitLab, but also others). Most of these are kept in some dedicated folder (say /home/myname/dev, for example).

Now, I have a new computer. How do I import all these repos at once? Of course, I don't want to have to clone them one by one manually! That would be ok for 1, 2 or 3 repos, but I've got dozens.

So I just wrote these 2 scripts to 1) generate a list of remotes in a text file (on the "old" machine), and 2) automate cloning from this file on the new computer.
(Finding something similar on SO or elsewhere seems incredibly hard, thus I rewrote that, probably not the first one...)

First, on the old computer, drop the following script in the folder holding the repos, and run it:

#!/bin/bash
# git_generate_url_list.sh
# Generate a list of the git remotes that are in the current folder
# (also logs their sizes)
# S. Kramm - 2020-01-04

a=$(ls -1d */)

echo "# repos list" > url_list.txt
echo "# repos size" > repos_size.txt
for i in $a
do
	echo "Processing $i"
	du -hs "$i" >> repos_size.txt
	cd "$i"; git remote get-url --all origin >> ../url_list.txt
	cd ..
done

The script also records the size of each repo, which can be useful to detect something going wrong...

Then, copy that URL list file to the new computer, drop it in /home/myname/dev (or whatever location), along with this second script, and run it:

#!/bin/bash
# git_clone_from_url_list.sh
# clone in current folder from a set of urls, from file given as argument
# S. Kramm - 2020-01-04

if [ "$1" == "" ]
then
	echo "Missing filename!"
	exit 1
fi

echo "git cloning from file $1"

while read a
do
	if [[ ${a:0:1} != "#" ]] # skip comment lines
	then
		echo "importing repo from $a"
		git clone "$a"
	fi
done < "$1"

Of course, if you use HTTPS, you will need to provide passwords for the private repos, but only once per online service.

Edit: also check out the other post about this topic.

mercredi 15 janvier 2020

Gift-for-html

Announcement:

samedi 6 juillet 2019

New C++11 State Machine library released

I just released version 0.9.3 of Spaghetti, a Finite State Machine library. It comes with a full manual that demonstrates all the use cases.

It is a header-only single-file library, that should build with any C++11 compliant compiler.

Compared to its main competitors (Boost::statechart and Boost::msm), I haven't yet had time to build an exhaustive comparison, but I have the feeling that Spaghetti is quite a bit easier to set up. On the other hand, it "might" not be as powerful, but again, that needs to be checked.

A lot of work still needs to be done (doc cleaning, testing on different platforms and compilers, adding extensive test coverage, ...) but it is usable right away, out of the box.

If you feel like supporting this kind of work, you may check it out and give some feedback, either here or as a GitHub issue if you spot a problem.



mercredi 18 avril 2018

C++: getting min/max values of boost::graph attributes

Just spend "some" time on some stupid issue on this problem, so I though I might as well post that here, if it can be useful to someone.

Say you have a boost::graph using so-called "bundled properties" (aka "inner properties") and you want to find the minimum and maximum values of an attribute. The standard library has this nice std::minmax_element() algorithm.

But... how can I use it on a graph's inner properties?

Say you have this kind of vertex, used in a graph definition (here undirected, but should also work for a directed graph):

struct vertex_properties
{
  int val;
};

typedef boost::adjacency_list<
    boost::vecS,
    boost::vecS,
    boost::undirectedS,
    vertex_properties
> graph_t;

typedef boost::graph_traits<graph_t>::vertex_descriptor vertex_t;

As an example, consider this program, creating a graph with 3 vertices (and, yes, no edges here):

graph_t g;
    
vertex_t v1 = boost::add_vertex(g);
vertex_t v2 = boost::add_vertex(g);
vertex_t v3 = boost::add_vertex(g);
    
// Set vertex properties
g[v1].val = 1;
g[v2].val = 2;
g[v3].val = 3;

To find the min/max value of the attributes, just call the algorithm with the right lambda function:

auto pit = boost::vertices( g );
auto result = std::minmax_element(
 pit.first,
 pit.second,
 [&]                                              // lambda
 ( vertex_t v1, vertex_t v2 )
 {
  return( g[v1].val < g[v2].val );
 }
);

This will return a pair of iterators pointing to the vertices holding the min and max values. So, to get the results:

    std::cout << "min=" << g[*result.first].val
     << " max=" << g[*result.second].val << '\n';

jeudi 4 décembre 2014

C++: erasing elements of std::vector using a lambda

Removing elements from a vector is a task that one can encounter pretty often and that isn't as easy as one could think.

The simplest case is when the index of the unwanted element is known. The std::vector class provides a first form of the erase() member function that takes a (const) iterator as argument.

Thus, if I want to remove, say the 10th element, it's as easy as:

    std::vector<whatever> myVec;
//... fill with more than 10 elements
    myVec.erase( myVec.begin() + 9 );

And if you want to remove the 3 elements between positions 10 and 12, it will be the second form of this function, which has two arguments:

    myVec.erase( myVec.begin() + 9, myVec.begin() + 12 );

(Yes, the second argument points to the first element you want to keep.)

But what happens when you want to remove elements based on their value? Say, remove all elements that have value foo (assuming that value is of type whatever).

This is a task for std::remove(). It actually does not remove anything: it just moves the elements around so that the ones to keep end up at the beginning, and it returns an iterator pointing to the first element to be erased. The next step is to feed that iterator to std::vector::erase().

The code, using the second form of erase():

    myVec.erase(
        std::remove(            // returns iterator on
            myVec.begin(),      // first element to
            myVec.end(),        // be removed
            foo
        ),
        myVec.end()
    );

(This is known as the Erase–remove idiom.)

Next, what if you want to remove elements based on some property they have? Consider for example a vector of vectors:

   std::vector<std::vector<Whatever>> myVec2;

And now the task is to remove the inner vectors that hold fewer than 2 elements. Okay, so we need to check every element, and decide whether to remove it or not.

This is a task for the second form of that same algorithm, std::remove_if(). Instead of a value, its third argument is a predicate, and the element will be "removed" (moved, actually) if that predicate returns true. A predicate is usually implemented as a functor: an object of some class that defines operator() and returns a bool, based on the given value.

At first, this seems like a harsh constraint, as nobody wants to declare a class for such a trivial task. But before C++11 came out, that was required (unless, maybe, using some Boost library). Otherwise, we needed to iterate through the vector, test each element, copy it or not, and swap:

   std::vector<std::vector<whatever>> newv;
   newv.reserve( myVec2.size() ); // to avoid resizing when using push_back
   for( size_t i=0; i < myVec2.size(); i++ )
      if( myVec2[i].size() >= MinSize )  // keep only the big-enough ones
         newv.push_back( myVec2[i] );
   std::swap( myVec2, newv );

This is where C++11 and lambdas come in. A lambda can be seen as a sort of "anonymous inline function" that can capture the variables in scope. Here, as the algorithm iterates over all the elements, the lambda's argument will be a std::vector.

A lambda is made of three parts:

  • [how variables are captured],
  • (the function's arguments),
  • {the body of the function}.

The complete code:

   std::size_t MinSize = ... (some value);
   myVec2.erase(
      std::remove_if(
         myVec2.begin(),
         myVec2.end(),
         [&]( const std::vector<whatever>& vw ) // lambda
            { return vw.size() < MinSize; }
      ),
      myVec2.end()
   );

More on C++ lambdas.