Mass delete GitHub Workflow Run Logs with this script

GitHub's workflow UI doesn't allow mass deletion of Action run logs; it takes two clicks to delete each run log.

If you want to delete hundreds of these, the only way is to script something.

Luckily, you can do so with the gh GitHub command line tool and some JSON parsing with the jq tool.

Requirements
gh – brew install gh
jq – brew install jq
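
A minimal sketch of such a script, using gh api and jq (REPO is a placeholder for your own "owner/name"; deleting a workflow run also removes its logs):

#!/bin/bash
# Each call to the runs endpoint returns one page of results (30 by default),
# so every pass deletes up to 30 run logs.
REPO="owner/repo"

gh api "repos/${REPO}/actions/runs" \
  | jq -r '.workflow_runs[].id' \
  | while read -r run_id; do
      echo "Deleting workflow run ${run_id}..."
      gh api -X DELETE "repos/${REPO}/actions/runs/${run_id}"
    done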

A successful run deletes 30 run logs at a time.

Controlling Dopamine Levels with Gut-Based Nanobots: A New Approach to Parkinson’s Disease and Depression


Today, I spent nearly 3 hours writing a science fiction paper on nanobots that can synthesize hormones in the human gut. I used a tool called ChatGPT, which is a large language model trained by OpenAI. It was able to assist me in writing a comprehensive and detailed paper that covered various aspects of the technology, including the engineering and legal sides.


I was amazed at how quickly ChatGPT was able to generate content and how accurately it incorporated the various details and technical terms that I provided. It was almost as if I had a team of Ph.D. candidates and postdoc researchers working alongside me, helping me to write the paper.

As I was working on the paper, I couldn’t help but think about how powerful this technology could become in the future. With more advanced versions of ChatGPT, it is not hard to imagine a world where researchers are able to write high quality, factual papers in just a few hours. This would allow for a tremendous acceleration of knowledge and progress, as predicted by Kurzweil’s law of accelerating returns.

Overall, my experience with ChatGPT was extremely positive and I can’t wait to see what the future holds for this technology. It has the potential to revolutionize the way we conduct research and advance humanity’s understanding of the world.

Also, this blog post was created with ChatGPT.

What is the Rust equivalent to Java’s PrintWriter?

In Rust, the equivalent of Java’s PrintWriter is the std::io::Write trait, which is implemented by a number of types that can be used to write data to an output stream, such as a file or a network socket.

To use Write to write text to an output stream, you can use the write_all method, which takes a byte slice as an argument and writes it to the output stream.

You can convert a string to a byte slice using the as_bytes method.

Here is an example of how you might use Write to write text to a file:

use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("output.txt")?;
    file.write_all(b"Hello, world!")?;
    Ok(())
}

If you want to use a buffered writer, similar to PrintWriter, you can use the BufWriter type from the std::io module.

This type wraps a Write implementation and buffers the output, improving performance by reducing the number of calls to the underlying write operation.

Here is an example of how you might use BufWriter to write text to a file:

use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let file = File::create("output.txt")?;
    let mut writer = BufWriter::new(file);
    writer.write_all(b"Hello, world!")?;
    writer.flush()?;
    Ok(())
}

You can also use the writeln! macro from the standard library to write a line of text to an output stream.

This macro takes a Write implementation and a format string as arguments, and writes the formatted string to the output stream followed by a newline character.

Here is an example of how you might use writeln! to write a line of text to a file:

use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("output.txt")?;
    writeln!(file, "Hello, world!")?;
    Ok(())
}

The difference between a Slice and an Array in Rust

In Rust, a slice is a reference to a contiguous section of a larger data structure, such as an array or a vector.

It is represented using the syntax &[T], where T is the type of the elements in the slice.

A slice does not own the data it refers to, it just provides a way to access the data in the original data structure.

An array, on the other hand, is a fixed-size data structure that owns a contiguous block of memory.

It is represented using the syntax [T; N], where T is the type of the elements in the array and N is the size of the array.

An array has a fixed size that must be known at compile time, and it is typically stored on the stack.

One key difference between slices and arrays is that slices are dynamically sized, while arrays have a fixed size.

This means that you can create a slice that refers to a portion of an array, but you cannot create an array that refers to a portion of another array.

Here is an example that demonstrates the difference between slices and arrays:

let arr = [1, 2, 3, 4, 5];
let slice = &arr[1..3]; // slice contains the elements [2, 3]

Another difference between slices and arrays is that slices are more flexible and work with a wider range of functions and data structures. For example, a function that takes &[T] accepts a borrowed view of any array or vector, regardless of length, while a parameter of type [T; N] only accepts arrays of exactly that length.
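
For example, a function that takes &[i32] accepts a whole array, a whole vector, or a sub-slice of either (a small sketch; the sum function is just illustrative):

fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let arr = [1, 2, 3, 4, 5];
    let vec = vec![10, 20, 30];

    println!("{}", sum(&arr));       // whole array borrowed as a slice
    println!("{}", sum(&vec));       // whole vector borrowed as a slice
    println!("{}", sum(&arr[1..3])); // just a portion: [2, 3]
}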

Slices are also cheap to work with: a slice is just a pointer and a length, so creating or passing one doesn't copy or allocate any data.

In general, slices are the more commonly used data type in Rust because they are more flexible and easier to work with than arrays. However, there are cases where using an array may be more appropriate, such as when you need to allocate a fixed-size data structure on the stack for performance reasons.

How to build your Docker image using the same Dockerfile regardless of the host architecture

Problem

If you are using Docker on a Mac M1 (an arm64 platform), you don't want to build your Linux images for amd64.

You could keep two FROM lines in your Dockerfile and comment one out depending on where you're building the image:

Dockerfile

# Building on Apple Silicon host
FROM --platform=linux/arm64 ubuntu:20.04

# Building on Intel/x86_64 host
#FROM --platform=linux/amd64 ubuntu:20.04

Eventually this becomes very annoying.

Solution

You can pass a build-time argument when you invoke docker build.

Put this in your Dockerfile:

ARG BUILDPLATFORM
FROM --platform=linux/$BUILDPLATFORM ubuntu:20.04

In your docker_build_image.sh script:

export BUILDPLATFORM=$(uname -m)
docker build --build-arg BUILDPLATFORM=${BUILDPLATFORM} -t myimagename .

Here’s my first very rough draft of “LifeTips”.

https://github.com/gubatron/LifeTips#life-tips

It is a short manuscript with actionable tips to live a better life; it's there primarily as a manual for my kids when I die.

Advice is grouped into 6 sections:
TIME
BODY
MIND
MONEY
WORK/LEADERSHIP (Entrepreneurship)
PEOPLE

Feedback is much needed, valued, and welcome.

If you have corrections or additions, feel free to create an issue, or just fork the project, make a fix, and send a pull request for review.

All contributors will be credited in the contributors page.

How to resize an AWS EC2 EBS root partition without rebooting, in 3 steps

Go to the AWS EBS dashboard and modify the volume size. It might be good to create a snapshot first for safety, but this procedure has never failed me.

# 1. Check the device of your partition
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 28.1M 1 loop /snap/amazon-ssm-agent/2012
loop1 7:1 0 97M 1 loop /snap/core/9665
loop2 7:2 0 55M 1 loop /snap/core18/1880
loop3 7:3 0 71.3M 1 loop /snap/lxd/16100
xvda 202:0 0 25G 0 disk
└─xvda1 202:1 0 20G 0 part /
xvdf 202:80 0 1T 0 disk /mnt/ebs/frostwire-files
xvdg 202:96 0 16G 0 disk /mnt/ebs/oldroot

# 2. Grow the partition
$ sudo growpart /dev/xvda 1
CHANGED: partition=1 start=2048 old: size=41940959 end=41943007 new: size=52426719 end=52428767

# 3. Extend the file system
$ sudo resize2fs /dev/xvda1
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 4
...

# Done, new size is reflected with df
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 25G 19G 5.6G 78% /

Accessing and manipulating a 32bit integer as a byte array in C++ using unions

I don’t think I’ve ever used union for anything, but today I came across a very interesting use case to avoid bit-shifting tricks when dealing with data embedded in numbers.

What’s a union?

Microsoft defines it this way:

union is a user-defined type in which all members share the same memory location. This definition means that at any given time, a union can contain no more than one object from its list of members. It also means that no matter how many members a union has, it always uses only enough memory to store the largest member.

Example:

// Represent an integer as a char, or a char as an int
union IntChar {
    unsigned int i;
    char c;
};

IntChar foo;
foo.i = 65; // 'A' in ASCII
printf("i: %d, c: %c\n", foo.i, foo.c);

// i: 65, c: A

We can also do the same with an anonymous union and use its members directly; writing to one changes the other's value:

union {
    unsigned int i;
    char c;
};

c = 'A';
printf("i: %d, c: %c\n", i, c);

// i: 65, c: A

Let's apply this feature to a 32-bit integer (4 bytes) and a 4-byte array.

This might come in handy when you need to treat an integer as an array of 8-bit values: the array subscript operator [] lets you read or manipulate the individual bytes of the number without bit-shifting tricks (>>, <<, &, |).

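The original code was embedded as a gist (int_as_array.cpp); here is a minimal sketch consistent with the output below, assuming a little-endian machine (the union and variable names are illustrative):

#include <cstdint>
#include <cstdio>

// A 32-bit integer and a 4-byte array sharing the same memory
union IntBytes {
    uint32_t value;
    uint8_t bytes[4];
};

int main() {
    IntBytes a;
    a.value = 0xaabbccdd;
    printf("a: 0x%x\n", a.value);

    // On a little-endian machine, bytes[0] is the least significant byte (0xdd)
    for (int i = 0; i < 4; i++) {
        printf("a[%d]: 0x%x\n", i, a.bytes[i]);
    }

    // Change a single byte through the array; the integer changes accordingly
    a.bytes[0] = 0xff;
    printf("a: 0x%x\n", a.value);
    return 0;
}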

Build and output:

$ g++ int_as_array.cpp && ./a.out
a: 0xaabbccdd
a[0]: 0xdd
a[1]: 0xcc
a[2]: 0xbb
a[3]: 0xaa
a: 0xaabbccff

DO NOT USE THIS TECHNIQUE IN PRODUCTION CODE: in standard C++, reading a union member other than the one most recently written is undefined behavior (type punning through unions is allowed in C, but not in C++), even if most compilers let you get away with it.

Here’s a word about this trick from my very esteemed friend (and elite coder) Dave Nicponski

Things to remember when compiling/linking C/C++ software

by Angel Leon. March 17, 2015. Updated August 29, 2019; last updated February 27, 2023.

Include Paths

In the compilation phase, you will usually need to specify include paths so that the interfaces (.h, .hpp) that define structs, classes, constants, and functions can be found.

With gcc and clang, include paths are passed with -I/path/to/includes; you can pass as many -I flags as you need.

In Windows, cl.exe takes include paths with the following syntax:
/I"c:\path\to\includes" (you can also pass as many as you need).

Some software uses macro definition variables that should be passed during compile time to decide what code to include.

Compilation flags

These compilation-time variables are passed using -D,
e.g. -DMYSOFTWARE_COMPILATION_VARIABLE -DDO_SOMETHING=1 -DDISABLE_DEPRECATED_FUNCTIONS=0

By convention, these compile-time flags are usually collected into a single variable named CXXFLAGS, which is then passed to the compiler when you write your build/make script.
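
For instance, a build script might collect them like this (the file name is illustrative):

CXXFLAGS="-DMYSOFTWARE_COMPILATION_VARIABLE -DDO_SOMETHING=1 -DDISABLE_DEPRECATED_FUNCTIONS=0"
g++ $CXXFLAGS -c mysource.cpp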

Object files

When you compile your .c or .cpp files, you end up with object files.
These files usually have the .o extension on Linux; on Windows they use the .obj extension.

You typically get one .o file per source file, and you can compile one or many source files in a single command.
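
For example, compiling three sources without linking (file names match the library example below):

# -c: compile only; produces ctest1.o, ctest2.o and ctest3.o
gcc -c ctest1.c ctest2.c ctest3.c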

Static Library files

When you have several .o files, you can bundle them together as a library, a static library. On Linux/Mac these static libraries are simply archive files, or .a files. On Windows, static library files use the .lib extension.

They are created like this in Linux/Mac:

ar -cvq libctest.a ctest1.o ctest2.o ctest3.o

libctest.a will contain ctest1.o, ctest2.o and ctest3.o

They are created like this on Windows:

LIB.EXE /OUT:MYLIB.LIB FILE1.OBJ FILE2.OBJ FILE3.OBJ

When you create an executable that uses these static libraries, the size of your executable will include all of the object files it statically links. The code ships right inside the executable, which makes it easier to distribute, but the executable can be bigger than it needs to be. Why? Because many of those .o files, or even the entire .a file you're linking against, might belong to a standard library that many other programs also need.
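
Linking an executable against that archive could look like this (main.o and the program name are illustrative):

# the archive's object code is copied into the executable
gcc main.o libctest.a -o myprogram

# equivalent, using the -L/-l shorthand
gcc main.o -L. -lctest -o myprogram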

Shared Libraries (Dynamic Libraries)

Shared (dynamic) libraries were invented so that different programs or libraries can make external (shared) references to them; since they're "shared", the symbols defined in them don't need to be part of your executable or library.

Your executable contains symbols whose entry points or offset addresses may point somewhere within itself (the symbols you defined in your code), but it will also reference symbols defined in shared libraries. A shared library is loaded only once into physical memory by the OS, and its symbols' offsets are mapped into the virtual memory of each process, so your process may see the same library symbols at different addresses than another process that uses the library.

Thus, not only does your executable stay as small as it needs to be, you also don't spend extra physical memory loading the library for every process/program that needs its symbols.

On Linux shared files exist under the .so (shared object) file extension, on Mac .dylib (dynamic library), and in Windows they’re called .dll (dynamic link libraries)

Another cool thing about dynamic libraries is that they can be loaded at runtime, not just linked at compile time. Browser plugins are a classic example of dynamic libraries loaded at runtime.

In Linux, .so files are created like this:

gcc -Wall -fPIC -c *.c
gcc -shared -Wl,-soname,libctest.so.1 -o libctest.so.1.0   *.o
  • -Wall enables all warnings.
  • -c means compile only, don’t run the linker.
  • -fPIC means “Position Independent Code”, a requirement for shared libraries in Linux.
  • -shared makes the object file created shareable by different executables.
  • -Wl passes a comma separated list of arguments to the linker.
  • -soname means “shared object name” to use.
  • -o <my.so> means output, in this case the output shared library

In Mac .dylib files are created like this:

clang -dynamiclib -o libtest.dylib file1.o file2.o -L/some/library/path -lname_of_library_without_lib_prefix

In Windows, .dll files are created like this:

LINK.EXE /DLL /OUT:MYLIB.DLL FILE1.OBJ FILE2.OBJ FILE3.OBJ

Linking to existing libraries

When linking your software you may face a situation in which you want to link against several standard shared libraries.
If all the libraries you need exist in a single folder, you can set LD_LIBRARY_PATH to that folder. By common convention, all shared libraries are prefixed with lib. If a library exists in LD_LIBRARY_PATH and you want to link against it, you don't need to pass the entire path to the library; you simply pass -lname and your executable will be linked against the symbols of libname.so, which should be somewhere inside LD_LIBRARY_PATH.

Tip: You should probably stay away from altering your LD_LIBRARY_PATH. If you do, make sure you keep its original value and restore it when you're done, as you might break the builds of other software on the system that depends on what's in LD_LIBRARY_PATH.

What if libraries are in different folders?

If you have some other libbar.so library in another folder outside LD_LIBRARY_PATH, you can explicitly pass the full path to that library, /path/to/that/other/library/libbar.so, or you can specify the folder that contains it with -L/path/to/that/other/library and then use the shorthand form -lbar. The latter option makes more sense if that second folder contains several other libraries.
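
Both options side by side, using the same hypothetical libbar:

# 1. pass the full path to the extra library
gcc main.o /path/to/that/other/library/libbar.so -o myprogram

# 2. add its folder with -L and use the shorthand -lbar
gcc main.o -L/path/to/that/other/library -lbar -o myprogram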

Useful tools

Sometimes you may be dealing with issues like undefined symbol errors, and you may want to inspect what symbols (functions) are defined in your library.

On Mac there's otool, on Linux/Mac there's nm, and on Windows there's depends.exe (a GUI tool that shows both dependencies and symbol tables; looking at the "Entry Point" column helps you clearly see the difference between symbols that link to a shared library and symbols linked statically against the same library).

Useful command options

See shared library dependencies on Mac with otool

otool -L libjlibtorrent.dylib 
libjlibtorrent.dylib:
	libjlibtorrent.dylib (compatibility version 0.0.0, current version 0.0.0)
	/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0)

See shared symbols with nm (Linux/Mac)
With nm, you can see the symbol’s name list.
Familiarize yourself with the meaning of the symbol types:

  • T (text section symbol)
  • U (undefined – useful for those undefined symbol errors)
  • I (indirect symbol).

If the symbol is local (non-external) the symbol type is presented in lowercase letters, for example a lowercase u represents an undefined reference to a private external in another module in the same library.

nm's documentation says that if you're working on Mac and a symbol is preceded by + or -, it's an Objective-C method; if you're familiar with Objective-C you'll know that + is for class methods and - is for instance methods. In practice the output tends to be more explicit, and you will often see objc or OBJC prefixed to those symbols.

nm is best used along with grep 😉
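
For example, to check whether a library actually defines a symbol you are looking for (library and symbol names are placeholders):

nm libctest.a | grep -i my_missing_function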

Find all Undefined symbols

nm -u libMacOSXUtilsLeopard.jnilib
_CFRelease
_LSSharedFileListCopySnapshot
_LSSharedFileListCreate
_LSSharedFileListInsertItemURL
_LSSharedFileListItemRemove
_LSSharedFileListItemResolve
_NSFullUserName
_OBJC_CLASS_$_NSArray
_OBJC_CLASS_$_NSAutoreleasePool
_OBJC_CLASS_$_NSDictionary
_OBJC_CLASS_$_NSMutableArray
_OBJC_CLASS_$_NSMutableDictionary
_OBJC_CLASS_$_NSString
_OBJC_CLASS_$_NSURL
__Block_copy
__NSConcreteGlobalBlock
__dyld_register_func_for_add_image
__objc_empty_cache
__objc_empty_vtable
_calloc
_class_addMethod
_class_getInstanceMethod
_class_getInstanceSize
_class_getInstanceVariable
_class_getIvarLayout

My C++ code compiles but it won’t link

Linking is simply “linking” a bunch of .o files to make an executable.

Each of these .o files may compile fine on its own out of its .cpp file, but when one references symbols that are supposed to exist in other .o files and they're nowhere to be found, you get linking errors.

Perhaps forward declarations got your compilation phase to pass, but then you get a bunch of "symbol not found" errors.
Make sure to read them slowly and see where those symbols are being referenced; in most cases these issues come down to namespace visibility.

Perhaps you copied the signature of a method that exists in a private scope elsewhere into some other namespace where your code wasn't compiling. All that did was make the code compile; the actual symbol may not be visible outside the scope where it's truly defined and implemented.

Function symbols can be private if they’re declared inside anonymous namespaces, or if they’re declared as static functions.

An example:

Undefined symbols for architecture x86_64:
  "FlushStateToDisk(CValidationState&, FlushStateMode)", referenced from:
      Network::TxMessage::handle(CNode*, CDataStream&, long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, bool, bool) in libbitcoin_server.a(libbitcoin_server_a-TxMessage.o)

Here, when I read the code of Network::TxMessage::handle(...), there was a call to FlushStateToDisk, which was declared in main.h and implemented in main.cpp. My TxMessage.cpp did include main.h, the compilation was fine, and I had a TxMessage.o file and a main.o, but the linker was complaining.

The issue was that FlushStateToDisk was declared static, and therefore only visible inside main.o. Once I removed the static from the declaration and implementation, the error went away and my executable linked. Similar things happen when functions are declared in anonymous namespaces in other files, even if you forward declare them in your local .h.
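
A minimal reproduction of that kind of error, with hypothetical file and function names:

// util.cpp
static void helper() {}  // 'static' gives helper internal linkage: its symbol stays private to util.o

// main.cpp
void helper();           // the forward declaration keeps the compiler happy...
int main() { helper(); } // ...but linking fails with an undefined reference to helper()

// $ g++ -c util.cpp main.cpp && g++ util.o main.o   <- fails at the link step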

In other cases your code compiles but you get a linking error because your library can't be added using -lfoo, and adding its containing folder to -L doesn't cut it. In that case, just add the full path to the library in your compilation command: gcc /path/to/the/missing/library.o ... my_source.cpp -o my_executable

Reminder:

DO NOT EXPORT CFLAGS, CPPFLAGS and the like in your .bash_profile/.bashrc; it can lead to unintended build behavior in many projects. I've wasted so many hours because of this mistake.