Today, I spent nearly 3 hours writing a science fiction paper on nanobots that can synthesize hormones in the human gut. I used a tool called ChatGPT, which is a large language model trained by OpenAI. It was able to assist me in writing a comprehensive and detailed paper that covered various aspects of the technology, including the engineering and legal sides.
I was amazed at how quickly ChatGPT was able to generate content and how accurately it incorporated the various details and technical terms I provided. It was almost as if I had a team of Ph.D. candidates and postdoc researchers working alongside me, helping me write the paper.
As I was working on the paper, I couldn’t help but think about how powerful this technology could become in the future. With more advanced versions of ChatGPT, it is not hard to imagine a world where researchers are able to write high quality, factual papers in just a few hours. This would allow for a tremendous acceleration of knowledge and progress, as predicted by Kurzweil’s law of accelerating returns.
Overall, my experience with ChatGPT was extremely positive and I can’t wait to see what the future holds for this technology. It has the potential to revolutionize the way we conduct research and advance humanity’s understanding of the world.
In Rust, the closest equivalent of Java’s PrintWriter is the std::io::Write trait, which is implemented by a number of types that can write data to an output stream, such as a file or a network socket.
To use Write to write text to an output stream, you can use the write_all method, which takes a byte slice as an argument and writes it to the output stream.
You can convert a string to a byte slice using the as_bytes method.
Here is an example of how you might use Write to write text to a file:
use std::fs::File;
use std::io::Write;
fn main() -> std::io::Result<()> {
let mut file = File::create("output.txt")?;
file.write_all(b"Hello, world!")?;
Ok(())
}
If you want to use a buffered writer, similar to PrintWriter, you can use the BufWriter type from the std::io module.
This type wraps a Write implementation and buffers the output, improving performance by reducing the number of calls to the underlying write operation.
Here is an example of how you might use BufWriter to write text to a file:
use std::fs::File;
use std::io::{BufWriter, Write};
fn main() -> std::io::Result<()> {
let file = File::create("output.txt")?;
let mut writer = BufWriter::new(file);
writer.write_all(b"Hello, world!")?;
writer.flush()?;
Ok(())
}
You can also use the writeln! macro from the standard library to write a line of text to an output stream.
This macro takes a Write implementation and a format string as arguments, and writes the formatted string to the output stream followed by a newline character.
Here is an example of how you might use writeln! to write a line of text to a file:
use std::fs::File;
use std::io::Write;
fn main() -> std::io::Result<()> {
let mut file = File::create("output.txt")?;
writeln!(file, "Hello, world!")?;
Ok(())
}
In Rust, a slice is a reference to a contiguous section of a larger data structure, such as an array or a vector.
It is represented using the syntax &[T], where T is the type of the elements in the slice.
A slice does not own the data it refers to; it just provides a way to access the data in the original data structure.
An array, on the other hand, is a fixed-size data structure that owns a contiguous block of memory.
It is represented using the syntax [T; N], where T is the type of the elements in the array and N is the size of the array.
Because its size is known at compile time, an array can be stored directly on the stack (or inline inside another value) without any heap allocation.
One key difference between slices and arrays is that slices are dynamically sized, while arrays have a fixed size.
This means that you can create a slice that refers to a portion of an array, but you cannot create an array that refers to a portion of another array.
Here is an example that demonstrates the difference between slices and arrays:
let arr = [1, 2, 3, 4, 5];
let slice = &arr[1..3]; // slice contains the elements [2, 3]
Another difference between slices and arrays is that slices are more flexible and can be used with a wider range of functions and data structures. For example, a function that takes a &[T] slice can accept borrowed data from an array, a Vec, or another slice, whereas a parameter with an array type is tied to one specific length.
Slices are also cheap to work with: a slice is just a pointer and a length, so creating one does not allocate or copy any memory.
In general, slices are the more commonly used data type in Rust because they are more flexible and easier to work with than arrays. However, there are cases where using an array may be more appropriate, such as when you need to allocate a fixed-size data structure on the stack for performance reasons.
I don’t think I’ve ever used union for anything, but today I came across a very interesting use case to avoid bit-shifting tricks when dealing with data embedded in numbers.
A union is a user-defined type in which all members share the same memory location. This definition means that at any given time, a union can contain no more than one object from its list of members. It also means that no matter how many members a union has, it always uses only enough memory to store the largest member.
Example:
// Represent an integer as a char, or char as int
union IntChar {
    unsigned int i;
    char c;
};

IntChar foo;
foo.i = 65; // 'A' in ASCII
printf("i: %d, c: %c\n", foo.i, foo.c);

// i: 65. c: A
We can also do the same with an anonymous union and use the member variables directly; writing to one member changes the value seen through the other.
Let’s apply this feature to a 32-bit integer (4 bytes) and a 4-byte array:
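Here is a minimal sketch of that idea (the variable names and output are my own illustration; see the caveat below about reading the member that wasn’t last written):

#include <cstdint>
#include <cstdio>

int main() {
    // Anonymous union: 'value' and 'bytes' share the same 4 bytes of memory.
    union {
        uint32_t value;
        uint8_t bytes[4];
    };
    value = 0x41424344;
    // Access individual bytes with [] instead of shifting and masking.
    // Note: the byte order you observe depends on the machine's endianness.
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}

On a little-endian machine this would print 44 43 42 41; on a big-endian one, 41 42 43 44.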
This, I believe, might come in handy if you need to use integers as arrays of 8-bit numbers, because you can use the array [] operator to access or manipulate the individual bytes without bit-shifting tricks (>>, <<, &, |).
I should clarify: you basically CANNOT do this safely in general. Specifically, it is undefined behavior to read from the member of the union that wasn’t most recently written (with very restrictive exceptions). You may see “reasonable” behavior depending on the compiler used, which may provide additional guarantees not present in the C++ spec, but it’s not portable, and a compliant compiler would be allowed to do just about anything it wants to break your program (like returning a zero value on the read, or skipping the preceding write, etc).
Things to remember when compiling/linking C/C++ software
by Angel Leon. March 17, 2015.
Updated August 29, 2019; last updated February 27, 2023.
Include Paths
During the compilation phase, you will usually need to specify the different include paths so that the interface files (.h, .hpp), which define structs, classes, constants, and functions, can be found.
With gcc and llvm/clang, include paths are passed with -I/path/to/includes; you can pass as many -I flags as you need.
On Windows, cl.exe takes include paths with the following syntax: /I"c:\path\to\includes" and you can also pass as many as you need.
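For example, a compile step that pulls headers from two include directories might look like this (the paths here are hypothetical):

g++ -I/usr/local/include -I../mylib/include -c main.cpp -o main.o
cl.exe /I"C:\path\to\includes" /c main.cpp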
Some software uses macro definitions that must be passed at compile time to decide which code to include.
Compilation flags
These compilation-time variables are passed using -D,
e.g. -DMYSOFTWARE_COMPILATION_VARIABLE -DDO_SOMETHING=1 -DDISABLE_DEPRECATED_FUNCTIONS=0
These compile-time flags are, by convention, usually collected into a single variable named CXXFLAGS, which is then passed to the compiler as a parameter for convenience when you’re writing your build/make script.
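For instance, a build script might collect the flags above and pass them along like this (a sketch, reusing the example flags):

CXXFLAGS="-DMYSOFTWARE_COMPILATION_VARIABLE -DDO_SOMETHING=1 -DDISABLE_DEPRECATED_FUNCTIONS=0"
g++ $CXXFLAGS -c main.cpp -o main.o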
Object files
When you compile your .c or .cpp files, you will end up with object files.
These files usually have the .o extension on Linux; on Windows they usually have the .obj extension.
You typically get one .o file per source file, though a single compiler invocation can compile many source files at once.
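For example, to produce one object file per source file (file names are just an illustration):

g++ -c ctest1.cpp ctest2.cpp ctest3.cpp
# produces ctest1.o, ctest2.o and ctest3.o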
Static Library files
When you have several .o files, you can put them together as a library, a static library. On Linux/Mac these static libraries are simply archive files, or .a files. On Windows, static library files use the .lib extension.
They are created like this in Linux/Mac:
ar -cvq libctest.a ctest1.o ctest2.o ctest3.o
libctest.a will contain ctest1.o, ctest2.o and ctest3.o
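To link an executable against that static library, you can either point the linker at it with -L/-l or pass the archive directly (a sketch, assuming libctest.a and main.o are in the current directory):

g++ main.o -L. -lctest -o myprogram
# or, equivalently:
g++ main.o libctest.a -o myprogram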
When you create an executable that makes use of static libraries, the size of your executable will be the sum of all the object files statically linked into it. The code ships right inside the executable, which makes it easier to distribute, but the executable can end up bigger than it needs to be. Why? Because many of the .o files, or even the entire .a file you’re linking against, might belong to a standard library that many other programs also need.
Shared Libraries (Dynamic Libraries)
Shared (dynamic) libraries were invented so that different programs or libraries could make external (shared) references to them; since they’re “shared”, the symbols defined in them don’t need to be part of your executable or library.
Your executable contains symbols whose entry points or offset addresses might point to somewhere within itself (symbols you defined in your code), but it will also reference symbols defined in shared libraries. A shared library is loaded into physical memory only once by the OS, and its symbols’ offsets are mapped into the virtual address space of each process, so your process may see the same library symbols at different addresses than some other process that uses the library.
Thus, not only is your executable as small as it needs to be, but you also don’t spend additional physical memory loading a copy of the library for every process/program that needs its symbols.
On Linux, shared libraries use the .so (shared object) file extension; on Mac, .dylib (dynamic library); and on Windows they’re called .dll (dynamic link libraries).
Another cool thing about dynamic libraries is that they can be loaded at runtime, not just linked at compile time. An example of runtime-loaded dynamic libraries are browser plugins.
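On Linux/Mac this kind of runtime loading is done with dlopen/dlsym. Here is a minimal sketch (the library name libfoo.so and the function foo are hypothetical):

#include <dlfcn.h>
#include <cstdio>

int main() {
    // Load the shared library at runtime instead of linking against it.
    void* handle = dlopen("libfoo.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Look up a symbol by name and cast it to the expected function type.
    typedef int (*foo_fn)(int);
    foo_fn foo = reinterpret_cast<foo_fn>(dlsym(handle, "foo"));
    if (foo) {
        printf("foo(21) = %d\n", foo(21));
    }
    dlclose(handle);
    return 0;
}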
When linking your software you may be faced with a situation in which you want to link against several standard shared libraries.
If all the libraries you need exist in a single folder, you can set LD_LIBRARY_PATH to that folder. By common convention all shared libraries are prefixed with the word lib. If a library exists in LD_LIBRARY_PATH and you want to link against it, you don’t need to pass the entire path to the library; you simply pass -lname and your executable will be linked against the symbols of libname.so, which should be somewhere inside LD_LIBRARY_PATH.
Tip: You should probably stay away from altering your LD_LIBRARY_PATH. If you do, make sure you keep its original value and restore it when you’re done, as you might otherwise break the build processes of other software on the system that depends on what’s in LD_LIBRARY_PATH.
What if libraries are in different folders?
If you have some other libbar.so library in another folder outside LD_LIBRARY_PATH, you can explicitly pass the full path to that library, /path/to/that/other/library/libbar.so, or you can specify the folder that contains it with -L/path/to/that/other/library and then use the shorthand form -lbar. This latter option makes more sense if the second folder contains several other libraries.
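Putting that together, the link command might look like this (paths hypothetical):

g++ main.o -L/path/to/that/other/library -lbar -o my_executable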
Useful tools
Sometimes you may be dealing with issues like undefined symbol errors, and you may want to inspect what symbols (functions) are defined in your library.
On Mac there’s otool; on Linux/Mac there’s nm; on Windows there’s depends.exe, a GUI tool that shows both dependencies and symbol tables (looking at the “Entry Point” column will help you clearly understand the difference between symbols linking to a shared library and symbols linking statically to the same library).
Useful command options
See shared library dependencies on Mac with otool
otool -L libjlibtorrent.dylib
libjlibtorrent.dylib:
libjlibtorrent.dylib (compatibility version 0.0.0, current version 0.0.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0)
See shared symbols with nm (Linux/Mac)
With nm, you can list a library’s symbol names.
Familiarize yourself with the meaning of the symbol types:
T (text section symbol)
U (undefined; useful for those undefined symbol errors),
I (indirect symbol).
If the symbol is local (non-external) the symbol type is presented in lowercase letters, for example a lowercase u represents an undefined reference to a private external in another module in the same library.
nm‘s documentation says that if you’re working on Mac and you see that a symbol is preceded by + or -, it’s an Objective-C method; if you’re familiar with Objective-C you will know that + is for class methods and - is for instance methods. In practice it tends to be more explicit, and you will often see objc or OBJC prefixed to those methods.
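For example, a few handy nm invocations (using the same library as the otool example above; the grep pattern is just an illustration):

nm -g libjlibtorrent.dylib          # list only external (global) symbols
nm -u libjlibtorrent.dylib          # list only undefined symbols
nm libjlibtorrent.dylib | grep -i torrent   # search for a symbol by name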
Linking is simply “linking” a bunch of .o files together to make an executable, resolving the symbol references between them.
Each of these .o files may compile fine on its own from its .cpp file, but when one references symbols that are supposed to exist in other .o files and they can’t be found, you get linking errors.
Perhaps, through forward declarations, you managed to get the compilation phase to pass, but then you get a bunch of “symbol not found” errors.
Make sure to read them slowly and see where these symbols are being referenced; in most cases these issues occur due to namespace or symbol visibility.
Perhaps you copied the signature of a method that exists in a private scope elsewhere into some other namespace where your code wasn’t compiling; all you did was make it compile, but the actual symbol might not be visible outside the scope where it’s truly defined and implemented.
Function symbols can be private if they’re declared inside anonymous namespaces, or if they’re declared as static functions.
An example:
Undefined symbols for architecture x86_64:
"FlushStateToDisk(CValidationState&, FlushStateMode)", referenced from:
Network::TxMessage::handle(CNode*, CDataStream&, long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, bool, bool) in libbitcoin_server.a(libbitcoin_server_a-TxMessage.o)
Here, when I read the code of Network::TxMessage::handle(...) there was a call to FlushStateToDisk, which was declared in main.h, and coded in main.cpp. My TxMessage.cpp did include main.h, the compilation was fine, I had a TxMessage.o file and a main.o, but the linker was complaining.
The issue was that FlushStateToDisk was declared static, and therefore only visible inside main.o. Once I removed the static from the declaration and the implementation, the error went away and my executable was linked. Similar things happen when functions are declared in anonymous namespaces in other files, even if you forward declare them in your local .h.
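A minimal sketch of the same failure mode (file and function names here are hypothetical):

// util.cpp
static void helper() { }      // static = internal linkage; the symbol only exists inside util.o

// caller.cpp
void helper();                // compiles fine thanks to the forward declaration...
void doWork() { helper(); }   // ...but linking caller.o with util.o fails: undefined symbol helper()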
In other cases your code compiles, but you get a linking error because your library can’t be added using -lfoo and adding its containing folder with -L doesn’t cut it. In that case you just add the full path to the library in your compilation command: gcc /path/to/the/missing/library.o ... my_source.cpp -o my_executable
Reminder:
DO NOT EXPORT CFLAGS, CPPFLAGS and the like in your .bash_profile/.bashrc; it can lead to unintended build consequences in many projects. I’ve wasted so many hours due to this mistake.