

A glimpse into the future of Roblox

Our vision to bring the world together through play has never been more relevant than it is now. As our founder and CEO, David Baszucki (a.k.a. Builderman), mentioned in his keynote, more and more people are using Roblox to stay connected with their friends and loved ones. He hinted at a future where, with our automatic machine translation technology, Roblox will one day act as a universal translator, enabling people from different cultures and backgrounds to connect and learn from each other.
During his keynote, Builderman also elaborated upon our vision to build the Metaverse; the future of avatar creation on the platform (infinitely customizable avatars that allow any body, any clothing, and any animation to come together seamlessly); more personalized game discovery; and simulating large social gatherings (like concerts, graduations, conferences, etc.) with tens of thousands of participants all in one server. We’re still very early on in this journey, but if these past five months have shown us anything, it’s clear that there is a growing need for human co-experience platforms like Roblox that allow people to play, create, learn, work, and share experiences together in a safe, civil 3D immersive space.
Up next, our VP of Developer Relations, Matt Curtis (a.k.a. m4rrh3w), shared an update on all the things we’re doing to continue empowering developers to create innovative and exciting content through collaboration, support, and expertise. He also highlighted some of the impressive milestones our creator community has achieved since last year’s RDC. Here are a few key takeaways:
And lastly, our VP of Engineering, Technology, Adam Miller (a.k.a. rbadam), unveiled a myriad of cool and upcoming features developers will someday be able to sink their teeth into. We saw a glimpse of procedural skies, skinned meshes, more high-quality materials, new terrain types, more fonts in Studio, a new asset type for in-game videos, haptic feedback on mobile, real-time CSG operations, and many more awesome tools that will unlock the potential for even bigger, more immersive experiences on Roblox.


Despite the virtual setting, RDC just wouldn’t have been the same without any fun party activities and networking opportunities. So, we invited special guests DJ Hyper Potions and cyber mentalist Colin Cloud for some truly awesome, truly mind-bending entertainment. Yoga instructor Erin Gilmore also swung by to inspire attendees to get out of their chair and get their body moving. And of course, we even had virtual rooms dedicated to karaoke and head-to-head social games, like trivia and Pictionary.
Over on the networking side, Team Adopt Me, Red Manta, StyLiS Studios, and Summit Studios hosted a virtual booth for attendees to ask questions, submit resumes, and more. We also had a networking session where three participants would be randomly grouped together to get to know each other.

What does Roblox mean to you?

We all know how talented the Roblox community is from your creations. We’ve heard plenty of stories over the years about how Roblox has touched your lives, how you’ve made friendships, learned new skills, or simply found a place where you can be yourself. We wanted to hear more. So, we asked attendees: What does Roblox mean to you? How has Roblox connected you? How has Roblox changed your life? Then, over the course of RDC, we incorporated your responses into this awesome mural.
Created by Alece Birnbach at Graphic Recording Studio

Knowledge is power

This year’s breakout sessions included presentations from Roblox developers and staff members on the latest game development strategies, a deep dive into the Roblox engine, learning how to animate with Blender, tools for working together in teams, building performant game worlds, and the new Creator Dashboard. Dr. Michael Rich, Associate Professor at Harvard Medical School and Physician at Boston Children’s Hospital, also led attendees through a discussion on mental health and how best to take care of your own and your friends’ emotional well-being, especially now during these challenging times.
Making the Dream Work with Teamwork (presented by Roblox developer Myzta)
In addition to our traditional Q&A panel with top product and engineering leaders at Roblox, we also held a special session with Builderman himself to answer the community’s biggest questions.
Roblox Product and Engineering Q&A Panel

2020 Game Jam

The Game Jam is always one of our favorite events of RDC. It’s a chance for folks to come together, flex their development skills, and come up with wildly inventive game ideas that really push the boundaries of what’s possible on Roblox. We had over 60 submissions this year—a new RDC record.
Once again, teams of up to six people from around the world had less than 24 hours to conceptualize, design, and publish a game based on the theme “2020 Vision,” all while working remotely no less! To achieve such a feat is nothing short of awe-inspiring, but as always, our dev community was more than up for the challenge. I’ve got to say, these were some of the finest creations we’ve seen.
Best in Show: Shapescape Created By: GhettoMilkMan, dayzeedog, maplestick, theloudscream, Brick_man, ilyannna You awaken in a strange laboratory, seemingly with no way out. Using a pair of special glasses, players must solve a series of anamorphic puzzles and optical illusions to make their escape.
Excellence in Visual Art: agn●sia Created By: boatbomber, thisfall, Elttob An obby experience unlike any other, this game is all about seeing the world through a different lens. Reveal platforms by switching between different colored lenses and make your way to the end.
Most Creative Gameplay: Visions of a perspective reality Created By: Noble_Draconian and Spathi Sometimes all it takes is a change in perspective to solve challenges. By switching between 2D and 3D perspectives, players can maneuver around obstacles or find new ways to reach the end of each level.
Outstanding Use of Tech: The Eyes of Providence Created By: Quenty, Arch_Mage, AlgyLacey, xJennyBeanx, Zomebody, Crykee This action/strategy game comes with a unique VR twist. While teams fight to construct the superior monument, two VR players can support their minions by collecting resources and manipulating the map.
Best Use of Theme: Sticker Situation Created By: dragonfrosting and Yozoh Set in a mysterious art gallery, players must solve puzzles by manipulating the environment using a magic camera and stickers. Snap a photograph, place down a sticker, and see how it changes the world.
For the rest of the 2020 Game Jam submissions, check out the list below:
20-20 Vision | 20/20 Vision | 2020 Vision, A Crazy Perspective | 2020 Vision: Nyon | A Wild Trip! | Acuity | Best Year Ever | Better Half | Bloxlabs | Climb Stairs to 2021 | Double Vision (Team hey apple) | Eyebrawl | Eyeworm Exam | FIRE 2020 | HACKED | Hyperspective | Lucid Scream | Mystery Mansion | New Years at the Museum | New Year’s Bash | Poor Vision | Predict 2020 | RBC News | Retrovertigo | Second Wave | see no evil | Sight Fight | Sight Stealers | Spectacles Struggle | Specter Spectrum | Survive 2020 | The Lost Chicken Leg | The Outbreak | The Spyglass | Time Heist | Tunnel Vision | Virtual RDC – The Story | Vision (Team Freepunk) | Vision (Team VIP People ####) | Vision Developers Conference 2020 | Vision Is Key | Vision Perspective | Vision Racer | Visions | Zepto
And last but not least, we wanted to give a special shout out to Starboard Studios. Though they didn’t quite make it on time for our judges, we just had to include Dave’s Vision for good measure.
Thanks to everyone who participated in the Game Jam, and congrats to all those who took home the dub in each of our categories this year. As the winners of Best in Show, the developers of Shapescape will have their names forever engraved on the RDC Game Jam trophy back at Roblox HQ. Great work!

‘Til next year

And that about wraps up our coverage of the first-ever digital RDC. Thanks to all who attended! Before we go, we wanted to share a special “behind the scenes” video from the 2020 RDC photoshoot.
Check it out:
It was absolutely bonkers. Getting 350 of us all in one server was so much fun and really brought back the feeling of being together with everyone again. That being said, we can’t wait to see you all—for real this time—at RDC next year. It’s going to be well worth the wait. ‘Til we meet again, my friends.
© 2020 Roblox Corporation. All Rights Reserved.

Improving Simulation and Performance with an Advanced Physics Solver


05, 2020

by chefdeletat
In mid-2015, Roblox unveiled a major upgrade to its physics engine: the Projected Gauss-Seidel (PGS) physics solver. For the first year, the new solver was optional and provided improved fidelity and greater performance compared to the previously used spring solver.
In 2016, we added support for a diverse set of new physics constraints, incentivizing developers to migrate to the new solver and extending the creative capabilities of the physics engine. Any new places used the PGS solver by default, with the option of reverting back to the classic solver.
We ironed out some stability issues associated with high mass differences and complex mechanisms by introducing the hybrid LDL-PGS solver in mid-2018. This made the old solver obsolete, and it was completely disabled in 2019, automatically migrating all places to PGS.
In 2019, the performance was further improved using multi-threading that splits the simulation into jobs consisting of connected islands of simulating parts. We still had performance issues related to the LDL that we finally resolved in early 2020.
The physics engine is still being improved and optimized for performance, and we plan on adding new features for the foreseeable future.

Implementing the Laws of Physics

The main objective of a physics engine is to simulate the motion of bodies in a virtual environment. In our physics engine, we care about bodies that are rigid, that collide and have constraints with each other.
A physics engine is organized into two phases: collision detection and solving. Collision detection finds intersections between geometries associated with the rigid bodies, generating appropriate collision information such as collision points, normals and penetration depths. Then a solver updates the motion of rigid bodies under the influence of the collisions that were detected and constraints that were provided by the user.
The motion is the result of the solver interpreting the laws of physics, such as conservation of energy and momentum. But doing this 100% accurately is prohibitively expensive, and the trick to simulating it in real time is to approximate, trading accuracy for performance while keeping the result physically realistic. As long as the basic laws of motion are maintained within a reasonable tolerance, this tradeoff is completely acceptable for a computer game simulation.

Taking Small Steps

The main idea of the physics engine is to discretize the motion using time-stepping. The equations of motion of constrained and unconstrained rigid bodies are very difficult to integrate directly and accurately. The discretization subdivides the motion into small time increments, where the equations are simplified and linearized making it possible to solve them approximately. This means that during each time step the motion of the relevant parts of rigid bodies that are involved in a constraint is linearly approximated.
Although a linearized problem is easier to solve, it produces drift in a simulation containing non-linear behaviors, like rotational motion. Later we’ll see mitigation methods that help reduce the drift and make the simulation more plausible.
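To make the time-stepping idea concrete, here is a deliberately tiny sketch (my own illustration, not engine code) of semi-implicit Euler integration: each step updates velocity first, then position, using the linearized equations of motion. Note how the discrete answer drifts slightly from the analytic one, which is exactly the kind of error the later mitigation strategies address.

```python
def semi_implicit_euler(x, v, a, dt):
    """One linearized time step: advance velocity, then position."""
    v = v + a * dt
    x = x + v * dt
    return x, v

# Drop a body from rest under gravity for one second, in small steps.
x, v = 0.0, 0.0
dt, g = 0.01, -10.0
for _ in range(100):
    x, v = semi_implicit_euler(x, v, g, dt)

# The analytic answer is x = g*t^2/2 = -5.0; the discretized motion
# lands close to it, but with a small drift proportional to dt.
assert abs(v - (-10.0)) < 1e-9
assert abs(x - (-5.0)) < 0.1
```

Shrinking dt reduces the drift but costs more steps per simulated second, which is the accuracy/performance tradeoff described above.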


Having linearized the equations of motion for a time step, we end up needing to solve a linear system or linear complementarity problem (LCP). These systems can be arbitrarily large and can still be quite expensive to solve exactly. Again the trick is to find an approximate solution using a faster method. A modern method to approximately solve an LCP with good convergence properties is the Projected Gauss-Seidel (PGS). It is an iterative method, meaning that with each iteration the approximate solution is brought closer to the true solution, and its final accuracy depends on the number of iterations.
This animation shows how a PGS solver changes the positions of the bodies at each step of the iteration process, the objective being to find the positions that respect the ball-and-socket constraints while preserving the center of mass at each step (this is a type of positional solver used by the IK dragger). Although this example has a simple analytical solution, it’s a good demonstration of the idea behind the PGS. At each step, the solver fixes one of the constraints and lets the other be violated. After a few iterations, the bodies are very close to their correct positions. A characteristic of this method is how some rigid bodies seem to vibrate around their final position, especially in coupled interactions with heavier bodies. If we don’t do enough iterations, the yellow part might be left in a visibly invalid state where one of its two constraints is dramatically violated. This is called the high mass ratio problem, and it has been the bane of physics engines because it causes instabilities and explosions. If we do too many iterations, the solver becomes too slow; if we do too few, it becomes unstable. Balancing the two has been a long and painful process.
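The iteration itself is easy to sketch. Below is a toy projected Gauss-Seidel loop (illustrative only, not the engine's implementation): each sweep solves one constraint row exactly while holding the others fixed, then clamps the result to its allowed range, and repeated sweeps converge toward the true solution.

```python
def projected_gauss_seidel(A, b, lo, hi, iterations=50):
    """Approximately solve A x = b with each x[i] clamped to [lo[i], hi[i]].
    Each sweep fixes one row at a time, letting the others be temporarily
    violated; accuracy improves with the number of iterations."""
    n = len(b)
    x = [0.0] * n                       # cold start: all unknowns zero
    for _ in range(iterations):
        for i in range(n):
            # Residual of row i with x[i]'s own contribution removed,
            # then solve row i exactly for x[i].
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
            x[i] = min(max(x[i], lo[i]), hi[i])  # the "projected" part
    return x

# A small diagonally dominant system; the exact solution is [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
inf = float("inf")
x = projected_gauss_seidel(A, b, lo=[-inf, -inf], hi=[inf, inf])
print(x)  # approximately [0.0909..., 0.6363...]
```

With infinite bounds this reduces to plain Gauss-Seidel; finite bounds turn it into the LCP-style projection used for contacts and joint limits.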

Mitigation Strategies

A solver has two major sources of inaccuracy: time-stepping and iterative solving (there is also floating-point drift, but it’s minor compared to the first two). These inaccuracies introduce errors in the simulation, causing it to drift from the correct path. Some of this drift is tolerable, like slightly different velocities or energy loss, but some is not, like instabilities, large energy gains, or dislocated constraints.
Therefore a lot of the complexity in the solver comes from the implementation of methods to minimize the impact of computational inaccuracies. Our final implementation uses some traditional and some novel mitigation strategies:
  1. Warm starting: starting with the solution from a previous time-step to increase the convergence rate of the iterative solver
  2. Post-stabilization: reprojecting the system back to the constraint manifold to prevent constraint drift
  3. Regularization: adding compliance to the constraints ensuring a solution exists and is unique
  4. Pre-conditioning: using an exact solution to a linear subsystem, improving the stability of complex mechanisms
Strategies 1, 2, and 3 are pretty traditional, but we have improved and perfected strategy 3. Also, although strategy 4 is not unheard of, we haven’t seen any practical implementation of it. We use an original factorization method for large sparse constraint matrices and a new, efficient way of combining it with PGS. The resulting implementation is only slightly slower than pure PGS but ensures that the linear system coming from equality constraints is solved exactly. Consequently, the equality constraints suffer only from drift coming from the time discretization. Details on our methods are contained in my GDC 2020 presentation. Currently, we are investigating direct methods applied to inequality constraints and collisions.
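As a sketch of why warm starting (strategy 1) helps, compare a cold start against reusing the previous frame's nearly unchanged solution; the system and the perturbation here are purely illustrative, not taken from the engine.

```python
def gauss_seidel(A, b, x0, iterations):
    """Plain Gauss-Seidel sweeps starting from an initial guess x0."""
    x = list(x0)
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
exact = [1.0 / 11.0, 7.0 / 11.0]

def error(x):
    return max(abs(xi - ei) for xi, ei in zip(x, exact))

# Same iteration budget, different starting points.
cold = gauss_seidel(A, b, [0.0, 0.0], iterations=2)
# Warm start: last frame's solution, nudged as if the scene moved slightly.
warm = gauss_seidel(A, b, [exact[0] + 0.01, exact[1] + 0.01], iterations=2)
assert error(warm) < error(cold)  # warm start converges much faster
```

Since consecutive physics frames barely change, the previous impulses are an excellent initial guess, effectively buying extra convergence for free.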

Getting More Details

Traditionally there are two mathematical models for articulated mechanisms: there are reduced-coordinate methods, spearheaded by Featherstone, that parametrize the degrees of freedom at each joint, and there are full-coordinate methods that use a Lagrangian formulation.
We use the second formulation as it is less restrictive and requires much simpler mathematics and implementation.
The Roblox engine uses analytical methods to compute the dynamic response of constraints, as opposed to the penalty methods that were used before. Analytical methods were initially introduced in Baraff 1989, where they are used to treat both equality and non-equality constraints in a consistent manner. Baraff observed that the contact model can be formulated using quadratic programming, and he provided a heuristic solution method (which is not the method we use in our solver).
Instead of using a force-based formulation, we use an impulse-based formulation in velocity space, originally introduced by Mirtich-Canny 1995 and further improved by Stewart-Trinkle 1996, which unifies the treatment of different contact types and guarantees the existence of a solution for contacts with friction. At each timestep, the constraints and collisions are maintained by applying instantaneous changes in velocities due to constraint impulses. An excellent explanation of why impulse-based simulation is superior is contained in the GDC presentation of Catto 2014.
The frictionless contacts are modeled using a linear complementarity problem (LCP) as described in Baraff 1994. Friction is added as a non-linear projection onto the friction cone, interleaved with the iterations of the Projected Gauss-Seidel.
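The friction projection step can be sketched in a few lines. This toy 2D version (names and conventions are my own, not the engine's) clamps the tangential impulse to the cone defined by the normal impulse and the friction coefficient:

```python
def project_friction(fn, ft, mu):
    """Project a tangential impulse ft onto the 2D friction cone
    |ft| <= mu * fn, given normal impulse fn and friction coefficient mu."""
    limit = mu * max(fn, 0.0)          # no normal force means no friction
    return max(-limit, min(limit, ft))

assert project_friction(10.0, 3.0, 0.5) == 3.0   # inside the cone: unchanged
assert project_friction(10.0, 8.0, 0.5) == 5.0   # clamped to the cone surface
assert project_friction(-1.0, 2.0, 0.5) == 0.0   # separating contact: zero
```

In 3D the projection clamps the magnitude of a 2D tangential impulse vector instead of a scalar, but the idea of interleaving this non-linear step with the PGS sweeps is the same.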
The numerical drift that introduces positional errors in the constraints is resolved using a post-stabilization technique using pseudo-velocities introduced by Cline-Pai 2003. It involves solving a second LCP in the position space, which projects the system back to the constraint manifold.
The LCPs are solved using a PGS / Impulse Solver popularized by Catto 2005 (also see Catto 2009). This method is iterative and considers each individual constraint in sequence, resolving it independently. Over many iterations, and in ideal conditions, the system converges to a global solution.
Additionally, high mass ratio issues in equality constraints are ironed out by preconditioning the PGS using the sparse LDL decomposition of the constraint matrix of equality constraints. Dense submatrices of the constraint matrix are sparsified using a method we call Body Splitting. This is similar to the LDL decomposition used in Baraff 1996, but allows more general mechanical systems, and solves the system in constraint space. For more information, you can see my GDC 2020 presentation.
The architecture of our solver follows the idea of Guendelman-Bridson-Fedkiw, where the velocity and position stepping are separated by the constraint resolution. Our time sequencing is:
  1. Advance velocities
  2. Constraint resolution in velocity space and position space
  3. Advance positions
This scheme has the advantage of integrating only valid velocities and limiting latency in external force application, while allowing a small amount of perceived constraint violation due to numerical drift.
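The sequencing above can be sketched for a single constrained body. This toy 2D pendulum is entirely illustrative (and, for simplicity, runs the position projection after the position update rather than between steps 2 and 3): it advances velocity, resolves the constraint in velocity space, advances position, then post-stabilizes the position.

```python
import math

def step(pos, vel, dt, gravity=-10.0, length=1.0):
    """One illustrative step for a 2D pendulum bob constrained to stay
    at distance `length` from the origin."""
    # 1. Advance velocities (external forces).
    vx, vy = vel[0], vel[1] + gravity * dt
    # 2. Constraint resolution in velocity space: remove the radial
    #    component so the velocity is tangent to the circle.
    x, y = pos
    nx, ny = x / length, y / length          # constraint normal
    radial = vx * nx + vy * ny
    vx, vy = vx - radial * nx, vy - radial * ny
    # 3. Advance positions with the corrected velocity.
    x, y = x + vx * dt, y + vy * dt
    # Post-stabilization: project the position back onto the constraint
    # manifold to cancel the numerical drift of the linearized step.
    d = math.hypot(x, y)
    x, y = x * length / d, y * length / d
    return (x, y), (vx, vy)

pos, vel = (1.0, 0.0), (0.0, 0.0)
for _ in range(1000):
    pos, vel = step(pos, vel, dt=1.0 / 240.0)
# The constraint stays satisfied to within floating-point error.
assert abs(math.hypot(*pos) - 1.0) < 1e-9
```

Without the projection step, the linearized position update would slowly spiral the bob off the circle, which is exactly the constraint drift the post-stabilization technique removes.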
An excellent reference for rigid body simulation is the book Erleben 2005 that was recently made freely available. You can find online lectures about physics-based animation, a blog by Nilson Souto on building a physics engine, a very good GDC presentation by Erin Catto on modern solver methods, and forums like the Bullet Physics Forum and GameDev which are excellent places to ask questions.

In Conclusion

The field of game physics simulation presents many interesting problems that are both exciting and challenging. There are opportunities to learn a substantial amount of cool mathematics and physics and to use modern optimization techniques. It’s an area of game development that tightly marries mathematics, physics, and software engineering.
Even though Roblox has a good rigid-body physics engine, there are areas where it can be improved and optimized. We are also working on exciting new projects like fracturing, deformation, soft bodies, cloth, aerodynamics, and water simulation.
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
This blog post was originally published on the Roblox Tech Blog.

Using Clang to Minimize Global Variable Use


23, 2020

by RandomTruffle
Every non-trivial program has at least some amount of global state, but too much can be a bad thing. In C++ (which constitutes close to 100% of Roblox’s engine code) this global state is initialized before main() and destroyed after returning from main(), and this happens in a mostly non-deterministic order. In addition to leading to confusing startup and shutdown semantics that are difficult to reason about (or change), it can also lead to severe instability.
Roblox code also creates a lot of long-running detached threads (threads which are never joined and just run until they decide to stop, which might be never). These two things together have a very serious negative interaction on shutdown, because long-running threads continue accessing the global state that is being destroyed. This can lead to elevated crash rates, test suite flakiness, and just general instability.
The first step to digging yourself out of a mess like this is to understand the extent of the problem, so in this post I’m going to talk about one technique you can use to gain visibility into your global startup flow. I’m also going to discuss how we are using this to improve stability across the entire Roblox game engine platform by decreasing our use of global variables.

Introducing -finstrument-functions

Nothing excites me more than learning about a new obscure compiler option that I’ve never had a use for before, so I was pretty happy when a colleague pointed me to this option in the Clang Command Line Reference. I’d never used it before, but it sounded very cool. The idea is that if we could get the compiler to tell us every time it entered and exited a function, we could filter this information through a symbolizer of some kind and generate a report of functions that a) occur before main(), and b) are the very first function in the call-stack (indicating it’s a global).
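The report described above boils down to a small piece of stream processing over the enter/exit events. Here is a hypothetical sketch of that post-processing logic (event names invented for illustration; this is not the actual tooling): a function is flagged as a global initializer if it is entered at call-stack depth zero before main() appears.

```python
def find_globals(events):
    """Given (kind, name) enter/exit events from an instrumented run,
    return functions entered at call-stack depth zero before main(),
    i.e. the global initializers."""
    depth = 0
    initializers = []
    for kind, name in events:
        if kind == "enter":
            if name == "main":
                break              # everything after main is uninteresting
            if depth == 0:
                initializers.append(name)
            depth += 1
        else:                      # "exit"
            depth -= 1
    return initializers

events = [
    ("enter", "init_global_a"), ("enter", "helper"), ("exit", "helper"),
    ("exit", "init_global_a"),
    ("enter", "init_global_b"), ("exit", "init_global_b"),
    ("enter", "main"),
    ("enter", "run"), ("exit", "run"),
]
assert find_globals(events) == ["init_global_a", "init_global_b"]
```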
Unfortunately, the documentation basically just tells you that the option exists, with no mention of how to use it or whether it actually does what it sounds like it does. There are also two different options that sound similar to each other (-finstrument-functions and -finstrument-functions-after-inlining), and I still wasn’t entirely sure what the difference was. So I decided to throw up a quick sample on godbolt to see what happened, which you can see here. Note there are two assembly outputs for the same source listing. One uses the first option and the other uses the second option, and we can compare the assembly output to understand the differences. We can gather a few takeaways from this sample:
  1. The compiler is injecting calls to __cyg_profile_func_enter and __cyg_profile_func_exit inside of every function, inline or not.
  2. The only difference between the two options occurs at the call-site of an inline function.
  3. With -finstrument-functions, the instrumentation for the inlined function is inserted at the call-site, whereas with -finstrument-functions-after-inlining we only have instrumentation for the outer function. This means that when using -finstrument-functions-after-inlining you won’t be able to determine which functions are inlined and where.
Of course, this sounds exactly like what the documentation said it did, but sometimes you just need to look under the hood to convince yourself.
To put all of this another way, if we want to know about calls to inline functions in this trace we need to use -finstrument-functions because otherwise their instrumentation is silently removed by the compiler. Sadly, I was never able to get -finstrument-functions to work on a real example. I would always end up with linker errors deep in the Standard C++ Library which I was unable to figure out. My best guess is that inlining is often a heuristic, and this can somehow lead to subtle ODR (one-definition rule) violations when the optimizer makes different inlining decisions from different translation units. Luckily global constructors (which is what we care about) cannot possibly be inlined anyway, so this wasn’t a problem.
I suppose I should also mention that I still got tons of linker errors with -finstrument-functions-after-inlining as well, but I did figure those out. As best as I can tell, this option seems to imply --whole-archive linker semantics. Discussion of --whole-archive is outside the scope of this blog post, but suffice it to say that I fixed it by using linker groups (e.g. -Wl,--start-group and -Wl,--end-group) on the compiler command line. I was a bit surprised that we didn’t get these same linker errors without this option and still don’t totally understand why. If you happen to know why this option would change linker semantics, please let me know in the comments!

Implementing the Callback Hooks

If you’re astute, you may be wondering what in the world __cyg_profile_func_enter and __cyg_profile_func_exit are, and why the program even links successfully in the first place without giving undefined symbol reference errors, since the compiler is apparently trying to call some function we’ve never defined. Luckily, there are some options that allow us to see inside the linker’s algorithm so we can find out where it’s getting this symbol from to begin with. Specifically, -y should tell us how the linker resolves a given symbol. We’ll try it with a dummy program first and a symbol that we’ve defined ourselves, then we’ll try it with __cyg_profile_func_enter.
$ cat instr.cpp
int main() {}
$ clang++-9 -fuse-ld=lld -Wl,-y -Wl,main instr.cpp
/usr/bin/../lib/gcc/x86_64-linux-gnu/crt1.o: reference to main
/tmp/instr-5b6c60.o: definition of main
No surprises here. The C Runtime Library references main(), and our object file defines it. Now let’s see what happens with __cyg_profile_func_enter and -finstrument-functions-after-inlining.
$ clang++-9 -fuse-ld=lld -finstrument-functions-after-inlining -Wl,-y -Wl,__cyg_profile_func_enter instr.cpp
/tmp/instr-8157b3.o: reference to __cyg_profile_func_enter
/lib/x86_64-linux-gnu/ shared definition of __cyg_profile_func_enter
Now, we see that libc provides the definition, and our object file references it. Linking works a bit differently on Unix-y platforms than it does on Windows, but basically this means that if we define this function ourselves in our cpp file, the linker will just automatically prefer it over the shared library version. Working godbolt link without runtime output is here. So now you can kind of see where this is going, however there are still a couple of problems left to solve.
  1. We don’t want to do this for a full run of the program. We want to stop as soon as we reach main.
  2. We need a way to symbolize this trace.
The first problem is easy to solve. All we need to do is compare the address of the function being called to the address of main, and set a flag indicating we should stop tracing henceforth. (Note that taking the address of main is undefined behavior[1], but for our purposes it gets the job done, and we aren’t shipping this code, so ¯\_(ツ)_/¯). The second problem probably deserves a little more discussion though.

Symbolizing the Traces

In order to symbolize these traces, we need two things. First, we need to store the trace somewhere on persistent storage. We can’t expect to symbolize in real time with any kind of reasonable performance. You can write some C code to save the trace to some magic filename, or you can do what I did and just write it to stderr (this way you can pipe stderr to some file when you run it).
Second, and perhaps more importantly, for every address we need to write out the full path to the module the address belongs to. Your program loads many shared libraries, and in order to translate an address into a symbol, we have to know which shared library or executable the address actually belongs to. In addition, we have to be careful to write out the address of the symbol in the file on disk. When your program is running, the operating system could have loaded it anywhere in memory. And if we’re going to symbolize it after the fact, we need to make sure we can still reference it after the information about where it was loaded in memory is lost. The Linux function dladdr() gives us both pieces of information we need. A working godbolt sample with the exact implementation of our instrumentation hooks as they appear in our codebase can be found here.

Putting it All Together

Now that we have a file in this format saved on disk, all we need to do is symbolize the addresses. addr2line is one option, but I went with llvm-symbolizer as I find it more robust. I wrote a Python script to parse the file and symbolize each address, then print it in the same “visual” hierarchical format that the original output file is in. There are various options for filtering the resulting symbol list so that you can clean up the output to include only things that are interesting for your case. For example, I filtered out any globals that have boost:: in their name, because I can’t exactly go rewrite boost to not use global variables.
The script isn’t as simple as you would think, because simply crawling each line and symbolizing it would be unacceptably slow (when I tried this, it took over 2 hours before I finally killed the process). This is because the same address might appear thousands of times, and there’s no reason to run llvm-symbolizer against the same address multiple times. So there’s a lot of smarts in there to pre-process the address list and eliminate duplicates. I won’t discuss the implementation in more detail because it isn’t super interesting. But I’ll do even better and provide the source!
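The core of that speed-up is just memoization: resolve each unique address once and reuse the result across the whole trace. A hypothetical sketch of the idea follows (the resolve callback stands in for an llvm-symbolizer invocation; none of these names come from the actual script):

```python
def symbolize_trace(addresses, resolve):
    """Symbolize a trace in which the same address repeats many times.
    `resolve` is the expensive per-address lookup (e.g. a call out to
    llvm-symbolizer); it is invoked once per unique address."""
    cache = {}
    symbols = []
    for addr in addresses:
        if addr not in cache:
            cache[addr] = resolve(addr)   # only pay for new addresses
        symbols.append(cache[addr])
    return symbols

calls = []
def fake_resolve(addr):                   # stands in for the symbolizer
    calls.append(addr)
    return f"sym_{addr:x}"

trace = [0x1000, 0x2000, 0x1000, 0x1000, 0x2000]
syms = symbolize_trace(trace, fake_resolve)
assert syms == ["sym_1000", "sym_2000", "sym_1000", "sym_1000", "sym_2000"]
assert len(calls) == 2                    # each unique address resolved once
```

With tens of thousands of repeated addresses in a real trace, this turns hours of symbolizer invocations into seconds.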
So after all of this, we can run any one of our internal targets to get the call tree, run it through the script, and then get output like this (actual output from a Roblox process, source file information removed):
excluded_symbols = ['.*boost.*']
excluded_modules = ['/usr.*']
/usr/lib/x86_64-linux-gnu/ 140 unique addresses
InterestingRobloxProcess: 38928 unique addresses
/usr/lib/x86_64-linux-gnu/ 1 unique addresses
/usr/lib/x86_64-linux-gnu/ 3 unique addresses
Printing call tree with depth 2 for 29276 global variables.
__cxx_global_var_init.5 (InterestingFile1.cpp:418:22)
  RBX::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:415:0)
__cxx_global_var_init.19 (InterestingFile2.cpp:183:34)
  (anonymous namespace)::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:171:0)
__cxx_global_var_init.274 (InterestingFile3.cpp:2364:33)
  RBX::InterestingRobloxClass3::InterestingRobloxClass3()
So there you have it: the first half of the battle is over. I can run this script on every platform, compare results to understand the order in which our globals are actually initialized in practice, then slowly migrate this code out of global initializers and into main, where it can be deterministic and explicit.

Future Work

It occurred to me sometime after implementing this that we could make a general-purpose profiling hook that exposed some public symbols (dllexport’ed if you speak Windows) and allowed a plugin module to hook into this dynamically. This plugin module could filter addresses using whatever arbitrary logic it was interested in. One interesting use case I came up with for this is that it could look up the debug information, check if the current address maps to the constructor of a function-local static, and write out the address if so. This effectively allows us to gain a deeper understanding of the order in which our lazy statics are initialized. The possibilities are endless here.

Further Reading

If you’re interested in this kind of thing, I’ve collected a couple of my favorite references for this kind of topic.
  1. Various: The C++ Language Standard
  2. Matt Godbolt: The Bits Between the Bits: How We Get to main()
  3. Ryan O’Neill: Learning Linux Binary Analysis
  4. John R. Levine: Linkers and Loaders
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
submitted by jaydenweez to u/jaydenweez

Linux/Unix for beginners. tutorial 1 (cont 1)

If you find this helpful, please upvote and follow to stay updated on the next tutorials.
This tutorial will introduce the Linux OS and compare it with Windows.
Windows Vs. Linux: File System
Linux Types of Files
Windows Vs. Linux: Users
Windows Vs. Linux: File Name Convention
Windows Vs. Linux: HOME Directory
Windows Vs. Linux: Other Directories
Windows Vs. Linux: Key Differences
Windows Vs. Linux File System
In Microsoft Windows, files are stored in folders on different data drives like C: D: E:
But, in Linux, files are ordered in a tree structure starting with the root directory.
This root directory can be considered as the start of the file system, and it further branches out various other subdirectories. The root is denoted with a forward slash '/'.
A general tree file system on your UNIX may look like this.
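You can peek at the top of this tree on your own machine (the exact set of directories varies slightly by distribution):

```shell
# List the top-level directories directly under the root '/'.
# Typical entries include bin, boot, dev, etc, home, usr, var.
ls -F /
```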

Types of Files
In Linux and UNIX, everything is a file. Directories are files, files are files, and devices like the printer, mouse, keyboard etc. are files.
Let's look into the File types in more detail.
General Files
General files are also called ordinary files. They can contain an image, video, program or simply text. They can be in ASCII or binary format. These are the most commonly used files by Linux users.
Directory Files
These files are a warehouse for other file types. You can have a directory file within a directory (a sub-directory). You can think of them as the 'folders' found in the Windows operating system.
Device Files:
In MS Windows, devices like printers, CD-ROMs, and hard drives are represented as drive letters like G: and H:. In Linux, they are represented as files. For example, if the first SATA hard drive had three primary partitions, they would be named and numbered /dev/sda1, /dev/sda2 and /dev/sda3.
Note: All device files reside in the directory /dev/
All the above file types (including devices) have permissions, which allow a user to read, edit or execute (run) them. This is a powerful Linux/Unix feature. Access restrictions can be applied for different kinds of users, by changing permissions.
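A quick way to see these file types in practice is the first character of 'ls -l' output (a sketch using a throwaway directory):

```shell
# The first character of 'ls -l' output encodes the file type:
#   '-' = ordinary file, 'd' = directory, 'c'/'b' = character/block device
tmp=$(mktemp -d)
touch "$tmp/notes.txt"
ls -ld "$tmp"            # starts with 'd' — a directory file
ls -l "$tmp/notes.txt"   # starts with '-' — an ordinary file
ls -l /dev/null          # starts with 'c' — a character device file
```

The same listing also shows the read/write/execute permission bits discussed above.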
Windows Vs. Linux: Users
There are 3 types of users in Linux.
Regular User
A regular user account is created for you when you install Ubuntu on your system. All your files and folders are stored in /home/<username>, which is your home directory. As a regular user, you do not have access to the directories of other users.
Root User
Other than your regular account another user account called root is created at the time of installation. The root account is a superuser who can access restricted files, install software and has administrative privileges. Whenever you want to install software, make changes to system files or perform any administrative task on Linux; you need to log in as a root user. Otherwise, for general tasks like playing music and browsing the internet, you can use your regular account.
Service user
Linux is widely used as a Server Operating System. Services such as Apache, Squid, email, etc. have their own individual service accounts. Having service accounts increases the security of your computer. Linux can allow or deny access to various resources depending on the service.
You will not see service accounts in Ubuntu Desktop version.
Regular accounts are called standard accounts in Ubuntu Desktop
In Windows, there are 4 types of user accounts: Administrator, Standard, Child, and Guest.
Windows Vs. Linux: File Name Convention
In Windows, you cannot have 2 files with the same name in the same folder. See below -

While in Linux, you can have 2 files with the same name in the same directory, provided they use different cases.
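For example (using a throwaway directory so nothing on your system is touched):

```shell
# 'sample' and 'SAMPLE' are two distinct files on Linux.
dir=$(mktemp -d)
cd "$dir"
echo one > sample
echo two > SAMPLE
ls            # shows both files
cat sample    # prints: one
cat SAMPLE    # prints: two
```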

Windows Vs. Linux: HOME Directory
For every user in Linux, a directory is created as /home/<username>.
Consider a regular user account "Tom". He can store his personal files and directories in the directory "/home/tom". He can't save files outside his user directory and does not have access to the directories of other users. For instance, he cannot access the directory "/home/jerry" of another user account, "Jerry".
The concept is similar to C:\Documents and Settings in Windows.
When you boot the Linux operating system, your user directory (from the above example, /home/tom) is the default working directory. Hence the directory "/home/tom" is also called the home directory, which is a bit of a misnomer.
The working directory can be changed using some commands which we will learn later.
Windows Vs. Linux: Other Directories
In Windows, System and Program files are usually saved in C: drive. But, in Linux, you would find the system and program files in different directories. For example, the boot files are stored in the /boot directory, and program and software files can be found under /bin, device files in /dev. Below are important Linux Directories and a short description of what they contain.

These are most striking differences between Linux and other Operating Systems. There are more variations you will observe when switching to Linux and we will discuss them as we move along in our tutorials.
Windows Vs. Linux: Key Differences
- Windows uses different data drives (C:, D:, E:) to store files and folders, whereas Unix/Linux uses a tree-like hierarchical file system.
- Windows has separate drives (C:, D:, E:); there are no drives in Linux.
- In Windows, hard drives, CD-ROMs and printers are considered devices; in Linux/Unix, peripherals like hard drives, CD-ROMs and printers are also considered files.
- Windows has 4 types of user accounts (Administrator, Standard, Child, Guest); Linux has 3 (Regular, Root, Service).
- The Windows Administrator user has all administrative privileges; the Linux root user is the superuser with all administrative privileges.
- In Windows, you cannot have 2 files with the same name in the same folder; Linux file naming is case-sensitive, so sample and SAMPLE are 2 different files in Linux/Unix.
- In Windows, My Documents is the default home directory; in Linux, /home/<username> is created as each user's home directory.
Linux is an open-source operating system, so users can change the source code as required, whereas Windows is a commercial operating system whose users don't have access to the source code.
Linux is considered more secure, as bugs are easier to detect and fix, whereas Windows' huge user base makes it a bigger target for hackers.
Linux runs fast even on older hardware, whereas Windows is slower in comparison.
In Linux, peripherals like hard drives, CD-ROMs and printers are considered files, whereas in Windows they are considered devices.
Linux files are ordered in a tree structure starting with the root directory, whereas in Windows files are stored in folders on different data drives like C:, D:, E:.
In Linux you can have 2 files with the same name in the same directory, while in Windows you cannot have 2 files with the same name in the same folder.
In Linux you find the system and program files in different directories, whereas in Windows they are usually saved on the C: drive.
Linux Command Line Tutorial: Manipulate Terminal with CD Commands
The most frequent tasks that you perform on your PC are creating, moving or deleting files. Let's look at various options for file management.
To manage your files, you can either use
Terminal (Command Line Interface - CLI)
File manager (Graphical User Interface -GUI)
In this tutorial, you will learn-
Why learn Command Line Interface?
Launching the CLI on Ubuntu
Present working Directory (pwd)
Changing Directories (cd)
Navigating to home directory (cd ~)
Moving to root directory (cd /)
Navigating through multiple directories
Moving up one directory level (cd ..)
Relative and Absolute Paths
Why learn Command Line Interface?
Even though the world is moving to GUI-based systems, the CLI has its specific uses and is widely used in scripting and server administration. Let's look at some compelling reasons to use it:
Comparatively, commands offer more options and are more flexible. Piping and stdin/stdout are immensely powerful and are not available in a GUI.
Some configurations in a GUI are up to 5 screens deep, while in the CLI it's just a single command.
Moving or renaming thousands of files in a GUI is time-consuming (using Control/Shift to select multiple files), while in the CLI, using regular expressions, you can do the same task with a single command.
The CLI loads fast and does not consume much RAM compared to a GUI. In crunch scenarios, this matters.
Both GUI and CLI have their specific uses. For example, in GUI, performance monitoring graphs give instant visual feedback on system health, while seeing hundreds of lines of logs in CLI is an eyesore.
You must learn to use both GUI(File Manager) and CLI (Terminal)
GUI of a Linux based OS is similar to any other OS. Hence, we will focus on CLI and learn some useful commands.
Launching the CLI on Ubuntu
There are 2 ways to launch the terminal.
1) Go to the Dash and type terminal

2) Or you can press CTRL + Alt + T to launch the Terminal
Once you launch the CLI (Terminal), you will find something like username@hostname:~$ (see image) written on it.

1) The first part of this line is the name of the user (bob, tom, ubuntu, home...)
2) The second part is the computer name or the host name. The hostname helps identify a computer over the network. In a server environment, host-name becomes important.
3) The ':' is a simple separator
4) The tilde '~' sign shows that the user is working in the home directory. If you change the directory, this sign will vanish.

In the above illustration, we have moved from the /home directory to /bin using the 'cd' command. The ~ sign is not displayed while working in the /bin directory; it reappears on moving back to the home directory.
5) The '$' sign suggests that you are working as a regular user in Linux. While working as a root user, '#' is displayed.
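You can reconstruct the same information the prompt shows using standard commands:

```shell
# user name @ host name : current directory — the parts of the prompt
echo "$(id -un)@$(uname -n):$PWD"
```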

Present Working Directory
The directory that you are currently browsing is called the present working directory. You log on to the home directory when you boot your PC. If you want to determine the directory you are presently working in, use the command -

pwd command stands for print working directory
Above figure shows that /home/guru99 is the directory we are currently working on.
Changing Directories
If you want to change your current directory use the 'cd' command.
cd /tmp
Consider the following example.

Here, we moved from directory /tmp to /bin to /usr and then back to /tmp.
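The same sequence as a copy-pasteable snippet:

```shell
cd /tmp && pwd   # prints: /tmp
cd /bin && pwd   # prints: /bin
cd /usr && pwd   # prints: /usr
cd /tmp && pwd   # prints: /tmp
```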
Navigating to home directory
If you want to navigate to the home directory, then type cd.

You can also use the cd ~ command.

cd ~
Moving to root directory
The root of the file system in Linux is denoted by '/'. Similar to 'c:\' in Windows.
Note: In Windows, you use backward slash "\" while in UNIX/Linux, forward slash is used "/"
Type 'cd /' to move to the root directory.
cd /

TIP: Do not forget the space between cd and /. Otherwise, you will get an error.
Navigating through multiple directories
You can navigate through multiple directories at the same time by specifying its complete path.
Example: If you want to move to the /cpu directory under /dev, you do not need to break this operation into two parts.
Instead, we can type '/dev/cpu' to reach the directory directly.
cd /dev/cpu

Moving up one directory level
To navigate up one directory level, try:
cd ..

Here by using the 'cd ..' command, we have moved up one directory from '/dev/cpu' to '/dev'.
Then by again using the same command, we have jumped from '/dev' to '/' root directory.
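If /dev/cpu doesn't exist on your machine, the same navigation can be reproduced with any nested directory, e.g. /usr/bin:

```shell
cd /usr/bin && pwd   # prints: /usr/bin
cd .. && pwd         # prints: /usr
cd .. && pwd         # prints: /  (the root directory)
```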
Relative and Absolute Paths
A path in computing is the address of a file or folder.
Example - in Windows: C:\Documents and Settings\user\downloads; in Linux: /home/user/downloads
There are two kinds of paths:
  1. Absolute Path:
Let's say you have to browse the images stored in the Pictures directory of the home folder 'guru99'.
The absolute file path of the Pictures directory is /home/guru99/Pictures.
To navigate to this directory, you can use the command.
cd /home/guru99/Pictures

This is called absolute path as you are specifying the full path to reach the file.
  2. Relative Path:
The Relative path comes in handy when you have to browse another subdirectory within a given directory.
It saves you from the effort to type complete paths all the time.
Suppose you are currently in your Home directory. You want to navigate to the Downloads directory.
You do not need to type the absolute path
cd /home/guru99/Downloads

Instead, you can simply type 'cd Downloads' and you would navigate to the Downloads directory as you are already present within the '/home/guru99' directory.
cd Downloads
This way you do not have to specify the complete path to reach a specific location within the same directory in the file system.
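The whole relative-vs-absolute idea in one snippet (using a temporary directory as a stand-in for /home/guru99):

```shell
base=$(mktemp -d)            # stand-in for a home directory
mkdir "$base/Downloads"
cd "$base"
cd Downloads && pwd          # relative path, resolved against "$base"
cd "$base/Downloads" && pwd  # absolute path, works from anywhere
```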
To manage your files, you can use either the GUI (File Manager) or the CLI (Terminal) in Linux. Both have their relative advantages. In this tutorial series, we will focus on the CLI, aka the Terminal.
You can launch the terminal from the dashboard or use the shortcut key Ctrl + Alt + T.
The pwd command gives the present working directory.
You can use the cd command to change directories.
An absolute path is the complete address of a file or directory.
A relative path is the location of a file or directory with respect to the current directory.
Relative paths help avoid typing complete paths all the time.
cd or cd ~ — Navigate to the HOME directory
cd .. — Move one level up
cd <path> — Change to a particular directory
cd / — Move to the root directory
If you find this helpful, kindly upvote and follow to stay updated on the next posts.
submitted by bogolepov to Hacking_Tutorials

Ethereum on ARM. Nethermind and Hyperledger Besu Eth1.0 clients included. Prysm Eth2.0 huge improvements. Raspberry Pi 4 progress. Software updates.

Ethereum on ARM is a project that provides custom Linux images for Raspberry Pi 4 (Ethereum on ARM32 repo [1]), NanoPC-T4 [2] and RockPro64 [3] boards (Ethereum on ARM64 repo [4]) that run Geth, Parity, Nethermind [5] or Besu [6] Ethereum clients as a boot service and automatically turns these ARM devices into a full Ethereum node. The images include other components of the Ethereum ecosystem such as, Raiden, IPFS, Swarm and Vipnode as well as initial support for Eth2.0 clients.
Images take care of all the necessary steps, from setting up the environment and formatting the SSD disk to installing and running the Ethereum software as well as synchronizing the blockchain.
All you need to do is flash the MicroSD card, plug in an ethernet cable, connect the SSD disk and turn on the device.
It was about time! We've been hard at work doing lots of tests, fixing bugs, and updating and including new software. This is what we've been up to.
Images update
Note: If you are already running an Ethereum on ARM node (Raspberry Pi 4, NanoPC-T4 or RockPro64) you can update the Ethereum software by running the following command:
sudo update-ethereum
For installing the new eth1 clients:
sudo apt-get update && sudo apt-get install nethermind hyperledger-besu
For further info regarding installation and usage please visit Ethereum on ARM32 Github repo [1] (Raspberry Pi 4) and Ethereum on ARM64 Github [4] (NanoPC-T4 and RockPro64)
SHA256 605f1a4f4a9da7d54fc0256c3a4e3dfed1780b74973735fca5240812f1ede3ea
SHA256 e67fdc743b33a4b397a55d721fcd35fc3541a8f26bd006d2461c035c2e46fe97
SHA256 9d75dc71aba8cd0b8c6b4f02408f416a77e8e6459aedc70f617a83a5070f17b5
Software updates
New software included
Ethereum 1.0
The Nethermind client is finally included, and this is great news for the eth1 client ecosystem. It took a while, mainly for two reasons: .NET support for ARM [7] is quite recent, and getting a self-contained binary for ARM is not an easy task (although Microsoft has a nice cross-compilation tool set). Besides, Nethermind has some native dependencies, and it took some time to figure out how .NET handles this and how to put all the config and system files together (by the way, thank you very much to the Nethermind team for their great support).
Nethermind is a great option for running an ETH1 node. .NET performs quite well and synchronization time is fantastic.
Keep in mind that Nethermind doesn't download receipts and bodies by default, which is why the sync time is so fast. You can change this behaviour by editing the mainnet.cfg file (see below).
As always, you need to enable the service and disable the other ETH1 clients. For instance, if you are running Geth:
sudo systemctl stop geth && sudo systemctl disable geth 
sudo systemctl enable nethermind && sudo systemctl start nethermind
You can tweak the client parameters here (currently only mainnet.cfg is supported)
Systemd parameters:
As always, output is redirected to syslog.
tail -f /var/log/syslog
ARM32 version has some problems, though. There are lots of crashes because of memory problems (as well as the other clients). This is certainly related to the ongoing “allocation memory bug” [8]. See “Raspberry Pi 4” section for further info. Feedback is appreciated.
Besu is an enterprise-grade Java-based Ethereum client developed by Pegasys [6]. Thank you very much to Felipe Faraggi for reaching out and giving us further information about it.
Besu is now included in Ethereum on ARM (64-bits only) and you can run it as a systemd service (please see Nethermind instructions above).
sudo systemctl stop geth && sudo systemctl disable geth sudo systemctl enable besu && sudo systemctl start besu 
It runs fine on NanoPC-T4 but needs more testing (particularly on the memory side). Please, give it a try and report your feedback to us. We will post more info soon, including full sync data.
Ethereum 2.0
Prysmatic Labs put a lot of work on their Prysm ETH2 client and the changes / improvements are impressive [9]. Additionally, they took ARM support very seriously from the beginning and are now releasing official binaries for ARM64. Thank you very much to the team!
We are getting 4-6 blocks/second (compared to 0.1-0.2 with version 0.3.1). This is a huge improvement and allows a NanoPC-T4 to sync the beacon chain in less than a day (23 hours).
To start syncing the beacon chain just start the service by running (again, stop and disable other clients):
sudo systemctl start prysm-beacon
If you want to be a validator, please, follow their instructions [10]. You can run the validator binary to do so.
Rockchip boards run on a legacy 4.4 Linux kernel and that means that it’s missing lots of improvements from the mainline branch, particularly on the storage side. We tried 5.4 and 5.5 mainline versions but it still needs some work, we will keep an eye on it [11].
On the other hand, there is an issue with logrotate (this is not a bug). We noticed that if you don't change the root password, cron jobs don't work and, among other things, logrotate doesn't truncate syslog, so it eventually fills up. To avoid this, you need to log in twice to change both passwords: first as the ethereum user (default password: ethereum) and second as the root user (default password: 1234).
We’ve been experiencing memory limitations on the Raspberry Pi 4 for quite a while now, mainly caused by the 32-bit OS [8]. While the Raspbian kernel is already using a 64bit kernel, the userland is still on 32bit so, in order to mitigate these problems as much as possible, we’ve ported the Armbian virtual RAM system to the Rpi4 [12] that leverages the ZRAM kernel module to improve memory performance and, additionally, raised the swap file to 6GB. All in all, eventual crashes may happen so take this into account.
At the same time, we are looking for alternatives to set up a full 64bit image. Firstly, to get rid of the memory problems and, secondly to allow the Raspberry Pi 4 to run Eth2 clients (currently Prysm and Lighthouse). We are looking into these 2 options:
Official Ubuntu Server image [13]: We tried the official Ubuntu Server 18.04.4 that includes the 5.4 mainline kernel. The good news here is that we haven't been able to reproduce the allocation memory problem. The bad news is that the disk performance is painfully slow so this doesn’t seem an option right now.
Unofficial Ubuntu Server image [14]: As described in a recent post [15], this image is a pure 64-bit OS but uses some Raspbian parts, including the 64-bit kernel and firmware. We will try the new image soon and post the results here.
We set up a Gitcoin Grant for the project. If you appreciate our work and want to support the project, please make a donation. Remember that in Gitcoin CLR rounds even $1 can make the difference! Thank you in advance.
Last but not least, we set up a Twitter account (back in January) where we try to post info on our progress, so follow us or reach us there.
PS. Be careful and stay safe!
submitted by diglos76 to ethereum

Stack + Arch + Static linking

I'm trying to build statically linked (bundled) binaries in Arch linux. Currently when I build binaries and run file test-exe I get this:
test-exe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/, for GNU/Linux 3.2.0, BuildID[sha1]=3eb9ed66faf9727d42dc6707d0e8e8d92eb4f0a2, stripped 
It seems that Stack makes dynamically linked binaries by default. I've tried several different ways to get around this and the latest thing I've done is that I have added these lines to package.yaml under executables:
ghc-options: -static
cc-options: -static
ld-options: -static -pthread
And I get this error:
/usr/bin/ld: error: cannot find -lgmp
I have installed Stack this way to keep Stack environment separate from rest of the system:
curl -sSL | sh
printf 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bash_profile
export PATH="$HOME/.local/bin:$PATH"
How should I proceed so that I wouldn't "blow things up"? To be honest I'm quite confused of all the different possibilities how I could/should setup Stack with Arch (Arch seems to be completely different animal compared to Ubuntu for example) so I didn't dare to proceed any further before asking from here first. If someone already has a working solution, help would be very much appreciated!
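Not an answer to the Stack setup itself, but the linker error can be diagnosed first: 'cannot find -lgmp' during a -static link usually means there is no static libgmp.a on the linker's search path (Arch's gmp package ships only the shared library). A quick check:

```shell
# Look for gmp artifacts the linker could use; a fully static build
# needs libgmp.a, not just
find /usr/lib -maxdepth 1 -name 'libgmp*'
```

If only shared objects show up, a static GMP has to come from somewhere else (e.g. building GMP yourself with --enable-static) before a -static link can succeed.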
EDIT: P.S. It makes things even more confusing that the Arch wiki ( says:
"You can also use Stack as an alternative build tool for Haskell packages, which will link statically by default."
submitted by light3rn to haskell

Haskell Web Development using Miso in Production

For a brief intro to the product: Polimorphic is a personalized political information platform that makes it incredibly easy to track and connect with your politicians and key issues. I'd love for you to sign up: you'd get a personalized news feed tailored to your representatives and interests, as well as a daily/weekly email digest telling you what your politicians are up to on the topics you care about.
Polimorphic's codebase is written in Haskell. We have found Haskell to be a great pleasure to work with and thought it would be worthwhile to do a technical writeup for this sub. There are a few different key packages that make up the project:
The database layer:
Uses Persistent and Esqueleto to define everything database related.
Data..Types: Domain specific types with PersistField instances to store in columns
Data.Internal: Large QuasiQuote laying out the whole Database
Data.: Re-exports the fields and types that should be used outside of this package
Data..Utils: Various utilities and integrity checkers
Data: Re-exports everything except the utilities
Data.Utils: Re-exports all the utilities
We are generally happy with this setup. Persistent and Esqueleto are great libraries. My biggest complaint would be the Entity-type, although that's more the fault of Haskell the language not supporting extensible records. The Entity approach involves a fair amount of boilerplate, and doesn't allow for DB defaults that aren't specified on the Haskell side, you must always fill out every field before you send the model to the database to be inserted. It would also be nice to have indices managed by Persistent instead of a separate .sql file.
It has been easy to extend Persistent/Esqueleto for things like postgis, see here and here, and having the full power of SQL available rather than anything overly ORM-y has been very nice.
For migrations we try to use Persistent's auto-migrations when possible, and when that fails we write and commit a sql script, and then delete it once we are done with it.
The web layer:
This is the main focus of this post. Our web layer is written using GHCJS and Miso, and we have found it to be an absolutely fantastic experience, even my new to Haskell cofounder can corroborate that it is better than any JS frontend framework he has used in the past.
I will go into fair amount of detail on our structure, as we have found it to be very modular and to scale very well as the codebase grows (20k-ish LOC for web).
Web.Components..State: Various types relevant to the component, including three important types: the State that contains all the state "owned" by that component, the Action that is a big sum type of all possible actions that the component can create, and the Output which is specifically for Actions that a parent component should handle (logging in a user or changing the URI).
Web.Components..View: The view function for that component, with a type like Extra -> State -> View Action, that converts the state of that component + extra info from the parent into the appropriate HTML, which fires back Action's specific to that component.
Web.Components..Handler: The handler function for that component, with a type like Extra -> Action -> State -> Effect Action Output, which parent components should call passing in the current component state and the received action, and it will receive further actions it should send right back to the handler, as well as outputs it must deal with itself. There are some commonly seen actions including Load which initializes the component state as needed via api calls, and Modify which takes in a state modifying function and tells the parent to modify its state.
Web.Components..Database: All the DB functions needed for the component, with types like MonadIO m => Foo -> ReaderT SqlBackend m Bar.
Web.Components..Load: Contains the load function of type Maybe UserId -> State -> ReaderT SqlBackend Handler State which essentially replaces the Load action from above in order to do server side rendering for SEO/UX on initial page load. All subsequent links are handled client side and involve the Load action instead. Calls into Database module.
Web.Components..Rpc.Api: Contains the Servant API for all communication needed between the server and the client by this component.
Web.Components..Rpc.Server: Contains the Servant Server that matches the above Api for the backend to run. Calls into Database module.
Web.Components..Rpc.Client: Contains the auto-generated client functions via servant-client-ghcjs.
Web.Components..Meta: Functions for converting from the Component's state to metadata for og-meta tags like the title / image / description.
Most of the above also have a top level equivalent, as you can think of the top level app as just another component.
Web.Urls: All the public URLs that you might want to link to in Servant form, actually in a separate package so that various other projects can use them, this is separate from the above Rpc Servant Api.
Web.Router: router that effectively has type RouteT Urls (FooBarBaz -> State), although it has been convoluted somewhat to work with Servant, since RouteT cannot easily be mapped over. FooBarBaz is things like info about the current logged in user. The client copies it over from the existing state, and the server generates it via cookies + DB querying.
Web.Server: Runnable application that adds things like logging and everything that goes in and then calls into the above router and view to create a Servant/Warp server that runs everything.
Web.Client: Runnable GHCJS application that boots Miso and hooks it up to the server side rendered content.
The key aspect of making this as modular and scalable as possible is making components interact with each other as nicely as possible. The general idea with that is that the parent state stores the state of its children, and the parent view calls the child views passing in their state + any needed extra info. One non-trivial aspect is that the parent's Action type actually contains the child Action types in the sum, e.g data Action = Foo | Bar | ChildAction Child.Action. Then you can do the following output <- first ChildAction $ Child.handler (state ^. childState) and do what you want with the output, and the actions created by the child won't be lost.
Overall we have been extremely happy with our Miso-based web setup. The performance is pretty darn solid, the GHCJS binary size is not ideal but it's not too problematic either. When compared with JS libraries we of course have all the huge advantages of Haskell as a whole, types and expressiveness and so on. The feature-set also contains basically everything we have needed, from server side rendering to client side URL handling, with minimal FFI for interacting with the occasional 3rd party library.
The miners/cli/emailer:
These I would say don't need as much detail, as the structuring aspect itself is somewhat simpler due to them being single commands that you run at will or call as scheduled tasks.
The miners use scalpel, aeson and servant client to query various government sources and store it in the db.
The cli provides various convenience functions like Persistents migration-printing / executing as well as the integrity checker that checks invariants not enforced by postgresql such as any data that has been denormalized for perf reasons, or just things that are hard to model in SQL.
The emailer uses mime-mail and HaskellNet to send emails, the HTML for the emails is generated using lucid.
Other stuff:
We use nix + cabal to develop and deploy everything, using a mixture of MacOS and Ubuntu for development, and Ubuntu for deployment. For production we just use nix-build, but for development speed we use nix-shell + cabal new-build. We use reflex-platform to set all this nix stuff up.
Generally, developing in Haskell has been fantastic. On-boarding has honestly not been an issue, as most of the code is in various intuitive EDSLs like Miso, Esqueleto, Servant or Persistent that you can basically figure out by looking at the existing code. So you can become quite productive quite early on, and as you become familiar with how Haskell really works, you can expand what parts of the code you are able to work on.
Now for biggest pain points. Interacting with large amounts of random data types is more painful than it needs to be due to lack of extensible rows/records/variants. Lots of verbose prefixes and exports, and some conversion functions that would otherwise be a lot smaller (insert into record vs copy over and rename every field). Compile time and particularly link time improvements would also help development speed, although they aren't a huge bottleneck.
We have released source code for things we think might be useful to other people here. Not planning on adding them to hackage anytime soon, but if they start getting attention that is definitely an option.
I'm happy to answer any questions regarding the company or the underlying technology. I am also open to making an example codebase / tutorial demonstrating the above architecture, although it would take a good amount of time and effort, so only if there is significant interest.
submitted by Tysonzero to haskell [link] [comments]

Help out with testing the upcoming Helium Hydra release!

General disclaimer: please use the buildbot binaries for testing purposes only!
As most of you probably already know, there will be a hard fork in September. To ensure miners and nodes have sufficient time to upgrade, there will be new binaries soon. In addition, it'd be best if these binaries contain as few bugs as possible. Therefore, it'd be beneficial if community members help out with testing the new release. Below I'll explain how to do so.
First, select your operating system and download the binaries from the buildbot. Alternatively, you can compile master yourself by following these instructions.

Binaries by operating system

[1] Note that the specific block height is not implemented yet.
[2] For now, block height is set to 1400000.

What to test?

First of all, the upcoming release will include various sync improvements. Thus, it'd be beneficial if community members sync from scratch and try to observe whether there is any significant difference in comparison with the old binaries. A more concrete test would be to first sync from scratch with the old release, subsequently sync from scratch with the upcoming release and lastly compare the results.
Other things to test are as follows:
If you don't know how to start up testnet, it is done as follows. The following flag needs to be added to monerod at startup:
You can add the flag as follows:
On Windows, make sure to launch it from the command line. Go to the folder where monerod is located and make sure your cursor isn't on any of the files. Then SHIFT + right click, and it will give you the option to "Open command window here". Lastly, type the following command:
monerod --testnet
On Linux and Mac OS X you should use the terminal to launch monerod. Note that this has to be done from the directory monerod is located in. The command is as follows:
./monerod --testnet
Bear in mind that you probably have to sync testnet first. However, as the testnet's blockchain isn't that big, it shouldn't take too long.

How to report issues / bugs

Issues and bugs are reported on the Monero repository on GitHub. However, before creating an issue, make sure it has not already been reported. Alternatively, you could post the issue or bug you've found in this thread and I'll make sure it ends up on GitHub.
EDIT 8/18/2017: Updated buildbot binaries. They should now be even with the release branch.
submitted by dEBRUYNE_1 to Monero [link] [comments]

Call for translators! Here's how to create a live-test environment for your translations on Scribus.

Hello folks!

I came here to tell you that Scribus has had quite the development between versions 1.4.x and 1.5.x.

Hopefully by the end of the year we should have the newest 1.6 stable release. But meanwhile…

I've been contributing to the Brazilian Portuguese translation of Scribus and would like to share a bit on how to translate and "live-test" your translations locally.

Scribus is an awesome piece of software that I see as having much potential. Unfortunately, while it currently has 62 localization projects, almost none are fully translated: only a few are close, and the majority are still far from done. This means there's a lot of demand for translators! And the devs would really appreciate seeing more people contribute.

Since several prospective translators may end up wanting to test how their translations look in the Scribus interface, but wouldn't like to depend on the devs updating it, or aren't acquainted with building an application with new translations, I made this small tutorial to be both useful and explanatory!

Well, to be clear, there are two ways you can run Scribus with your up-to-date translations: by building the application itself or by changing the AppImage files.

But I'll tell you how to build the application on Linux, since that's what I tried, and since the AppImage version is currently a little behind (1.5.4).

Since the code for Scribus is hosted on a Subversion instance, we can use svn co instead of the usual git clone. So open the terminal and run the following:
svn co svn:// 

Alternatively, if you prefer, you can clone the repository on GitLab, which is a mirror of the Subversion one.
git clone 

It will take quite a while to download the whole source code.

After the download is done, you should have a Scribus folder inside your home folder. Inside, let's create a new folder called build, just to keep things nice and clean.
mkdir ~/Scribus/build 

That and the folder ~/Scribus/resources/translations should be the ones we'll use the most and the ones we will care about.

Let's first change directories into the build folder we just created.
cd ~/Scribus/build 

The first step in building applications from source is installing dependencies. Fortunately I already researched those for you. :)

It's not particularly hard to find out which applications are needed, but that can be explained another time! In any case…

You can install the following dependencies

On Ubuntu like so:
sudo apt install subversion g++ cmake extra-cmake-modules libpoppler-dev libpoppler-cpp-dev libpoppler-private-dev qtbase5-dev qttools5-dev libopenscenegraph-dev libgraphicsmagick-dev libcairo2-dev librevenge-dev python-all-dev libhunspell-dev libcups2-dev libboost-python-dev libpodofo-dev libcdr-dev libfreehand-dev libpagemaker-dev libmspub-dev libqxp-dev libvisio-dev libzmf-dev libgraphicsmagick++1-dev 

On openSUSE like so:
sudo zypper install subversion cmake extra-cmake-modules libqt5-qttools-devel GraphicsMagick-devel libfreehand-devel librevenge-devel libvisio-devel libqxp-devel libmspub-devel libcdr-devel libpagemaker-devel cups-devel libtiff-devel libzmf-devel libpoppler-qt5-devel libqt5-qtbase-devel libOpenSceneGraph-devel python-devel libjpeg62-devel liblcms2-devel harfbuzz-devel libopenssl-devel hunspell-devel 

On Arch like so:
sudo pacman -S subversion gcc make cmake extra-cmake-modules qt5-base qt5-tools openscenegraph python2 pkgconfig hunspell podofo boost graphicsmagick poppler librevenge harfbuzz-icu libfreehand libpagemaker libcdr libmspub libqxp libvisio libzmf 

On Solus like so:
sudo eopkg install -c system.devel 

sudo eopkg install subversion qt5-tools-devel graphicsmagick-devel openscenegraph-devel poppler-qt5-devel qt5-base-devel librevenge-devel libfreehand-devel libvisio-devel libqxp-devel libhunspell-devel libmspub-devel libcdr-devel libpagemaker-devel podofo-devel cups-devel libjpeg-turbo-devel libtiff-devel libzmf-devel libboost-devel 
(yes, for Solus it's two commands: the first installs the base system.devel component of build tools, the second the remaining dependencies)

After that, you can run this single command:
cmake -DCMAKE_INSTALL_PREFIX:PATH=~/bin/scribus .. 

This will generate build instructions inside the build folder using the content from the parent directory (..), that is, the Scribus folder, and set it to install directly into a new ~/bin/scribus folder.

It should finish just fine. If not, do tell me so I can update this post. :D

Once finished, you no longer have to run this command ever again (for our localization purposes). You can now build and install it:
make install 

Note that the output mentions several localization files, the kind we will be using for translating, such as scribus.pt_BR.ts

Shouldn't take too long.

Now, the localized files that were previously .ts now have a .qm extension and should be installed in ~/bin/scribus/resources/translations/. Sadly, we can't edit/translate these kinds of files.

Anyway, now we have the latest version of Scribus 1.5.x installed! Good job.

Just one last command is needed.
sudo ln -s ~/bin/scribus/bin/scribus /usr/local/bin/scribus-trans 

This command symlinks the scribus binary you just installed into the usual path where local executables are found, under the name scribus-trans. This means you can run it from the terminal by typing scribus-trans, or through your menu. You can name it whatever you want, and by naming it differently you can keep multiple versions on your system: for example scribus, scribus-ng, scribus-trunk, and your built scribus-trans. If it's going to be the only version on your system, you can simply call it scribus; if you want to keep two versions, you can call it scribus-trunk.

This is useful in case you want to compare features or translations between the 1.4.x and 1.5.x versions.

Now to translate: you have two options, either you join the localization team in the very popular, easy to use and absolutely proprietary™ platform for localization, Transifex, or you can translate the localization files themselves.

On Transifex, after changing any string, you should be able to go to your Dashboard, visit the section for your language pair and click on Resources. Download the file for use, rename it according to the scheme in ~/Scribus/resources/translations and replace it!

Similarly, you can translate the files locally by simply editing the aforementioned localization files (the ones with .ts extension under scribus/resources/translations) on a text editor or opening them in something like QtLinguist (which should be provided by the package qt5tools that you already have installed by now).

I'd suggest checking how your translation looks in Scribus every 1000 or 2000 words. This way, you can always keep in mind more or less where each part you translated is and whether the language you used is consistent, while also not interrupting your workflow much. But it's just a suggestion!

After that, simply come back to your build folder and type the command make install again!

You don't need to run any of the other previous commands again; if you need to update your translation, simply replace the .ts file, go to your build folder and make install! Simple!

This will change your current local Scribus to include the new translation. It should give you a better idea on whether your translation fits well with the interface or not, if it's consistent, if there were typos or mistakes regarding variables, or if the interface reads fluently like one would expect from a professional desktop publishing application.

That's it. There isn't that much to it aside from the initial setup. Now this should be a suitable setting for you to live-test your translations easily! I hope this little tutorial has been helpful!

For more information, please refer to the current scribus wiki or contact the devs over at #scribus on IRC, they're very friendly. :)
submitted by LinuxFurryTranslator to scribus [link] [comments]

Request for community assistance in distro/strata acquisition strategies

The high-level goal for the next release is to make Bedrock Linux easier to try out. There are two broad steps for this:
That latter item, sadly, cannot be done in a generalized fashion. We'll need some logic for each distro (or possibly each family of related distros) we're interested in. This adds up to a lot of time-consuming work. Luckily, this work is easily parallelizable across different people! Instead of further delaying the next release waiting for me to read up on a bunch of distros I don't know, or limiting the usefulness of the next release by skipping support for them, I thought it best to reach out to others for help here. Odds are good y'all know some distros better than I do.
Here's what I'm looking for:
  1. Some way to check if the distro supports the current machine's architecture (e.g. x86_64)
    • Presumably compare the supported options against uname -m, maybe after mapping it if it's in another format.
  2. Some way to list a distro's available releases, if that makes sense for the given distro.
    • If there's a way to filter it down to only currently supported releases, that would be ideal.
    • If the release has a number of names/aliases, all of them would be of value. This way a user can specify the name in any format and we'll grab it.
  3. Some way to indicate which release should be considered the default selected one if none is specified, if that makes sense for the given distro.
  4. Some way to get a list of supported mirrors.
  5. Given a distro, release, and mirror, some way to get the distro's "base" files into a specified directory.
  6. Whatever steps are necessary to set up the previously selected mirror for the package manager, if that makes sense for the distro.
  7. Whatever steps are necessary to update/upgrade the now on-disk files, in case the above step grabbed files which need updates.
  8. Whatever steps are necessary to set up the distro's locales, given the current locale
  9. Any tweaks needed to make it work well with Bedrock Linux.
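To make item 1 concrete, the architecture check could be a small shell helper that maps uname -m output to each distro's naming scheme. This is only a sketch -- the distro set and mappings here are illustrative (though Debian really does call x86_64 "amd64"), and the real thing would live in per-distro config:

```shell
# Hypothetical helper: given a distro name and the output of `uname -m`,
# print the name that distro uses for this architecture, or fail if the
# distro doesn't support it.
map_arch() {
    case "$1:$2" in
        arch:x86_64)    echo "x86_64" ;;
        debian:x86_64)  echo "amd64" ;;    # Debian calls x86_64 "amd64"
        debian:aarch64) echo "arm64" ;;
        void:x86_64)    echo "x86_64" ;;
        *)              return 1 ;;        # architecture unsupported
    esac
}

# Usage: check support before fetching anything
if arch_name=$(map_arch debian "$(uname -m)"); then
    echo "debian supports this machine as: $arch_name"
fi
```

The same shape generalizes: a per-distro case arm is exactly the kind of pluggable logic described above.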
What makes this tricky are some constraints we'll need to use:
Some quick and dirty examples:
Arch Linux:
  1. Arch Linux only supports x86_64.
  2. Rolling release, no need to list releases.
  3. Rolling release, no need to determine default release.
  4. The official mirrors are available at which can be trivially downloaded and parsed
  5. Use a bootstrap tarball provided in the various mirrors to set up an environment for pacstrap, then use pacstrap to acquire the files
    • Given a mirror, we can find an HTML index page at $MIRROR/iso/latest/ which contains a file in the form archlinux-bootstrap-<date>-x86_64.tar.gz. We can download and untar this to some temporary location
    • Add the mirror to the temp location's /etc/pacman.d/mirrorlist
    • chroot to the temp location and run /usr/bin/pacman-key --init && /usr/bin/pacman-key --populate archlinux.
    • chroot to the temp location and run pacstrap
    • kill the gpg-agent the above steps spawn and remove temp location.
    • chroot to the stratum and run /usr/bin/pacman-key --init && /usr/bin/pacman-key --populate archlinux.
    • kill the gpg-agent the above step spawns
  6. Add the mirror to the stratum's /etc/pacman.d/mirrorlist
  7. pacman -Syu
  8. Append locale to stratum's /etc/locale.gen and run locale-gen.
  9. Comment out CheckSpace in /etc/pacman.conf, as Bedrock Linux's bind mounts confuse it. Include a comment explaining this in case users read that config.
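The Arch steps 5 and 6 above might look roughly like this in shell. This is a sketch only: the mirror URL layout and the bootstrap tarball name are assumptions (the real filename embeds a release date that has to be scraped from the mirror's index page), and the acquisition itself needs root and network access:

```shell
# Build the download URL for the bootstrap tarball.
# $1 = mirror base URL, $2 = tarball filename scraped from the index page
bootstrap_url() {
    echo "$1/iso/latest/$2"
}

# Illustrative acquisition sequence -- do not treat as a finished script.
acquire_arch() {
    mirror=$1
    tmp=$(mktemp -d)
    wget -O "$tmp/bootstrap.tar.gz" \
        "$(bootstrap_url "$mirror" "archlinux-bootstrap-x86_64.tar.gz")"
    tar -xzf "$tmp/bootstrap.tar.gz" -C "$tmp"
    # point the bootstrap environment at the chosen mirror
    echo "Server = $mirror/\$repo/os/\$arch" \
        >> "$tmp/root.x86_64/etc/pacman.d/mirrorlist"
    chroot "$tmp/root.x86_64" /usr/bin/pacman-key --init
    chroot "$tmp/root.x86_64" /usr/bin/pacman-key --populate archlinux
    # pacstrap would then install the base system into the stratum
}
```
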
Debian:
  1. Parse, map to uname -m values, compare against uname -m.
  2. Given a mirror, look at:
    • The codename and version fields in /dists/oldstable/Release
    • The codename and version fields in /dists/stable/Release
    • The codename and version fields in /dists/testing/Release
    • Unstable/Sid, no version number.
  3. Default release is stable from above.
  4. Parse
  5. Use busybox utilities to download the package list and calculate packages needed to run debootstrap. Download those, extract them, then use those to run debootstrap.
    • Download /dists/<release>/main/binary-<arch>/Packages.gz
    • Parse Packages.gz for debootstrap's dependencies.
      • Packages.gz is a relatively simple format. This is doable, if slow, in busybox shell/awk.
    • wget the dependencies from the mirror and extract them to temp location
      • Busybox can extract .deb files.
    • chroot to temp and debootstrap stratum
  6. Add lines to /etc/apt/sources.list as needed
  7. apt update && apt upgrade
  8. Install locales-all.
  9. None needed.
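The "doable in busybox shell/awk" claim in the Debian steps can be sketched. Given a decompressed Packages file on stdin, this prints the Depends line for one package; the field names are from the real Debian Packages format, but the sample data is made up:

```shell
# Print the Depends line of a given package from a Packages index.
# $1 = package name; reads Packages format on stdin
deps_of() {
    awk -v pkg="$1" '
        /^Package: /             { current = $2 }
        /^Depends: / && current == pkg {
            sub(/^Depends: /, ""); print
        }'
}

# Tiny made-up sample showing the shape of the data:
printf 'Package: debootstrap\nDepends: wget, gpgv\n' | deps_of debootstrap
# prints: wget, gpgv
```

A real implementation would also need to follow those dependencies recursively, but the per-record parsing is this simple.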
Ubuntu and Devuan will likely be very similar, but they'll need some specifics. Ubuntu won't have oldstable/stable/testing/sid, for example, and they'll both need different mirrors.
Void Linux:
  1. Download index page from mirror then look at filenames, compare against uname -m.
  2. Rolling release, no need to list releases.
  3. Rolling release, no need to determine default release.
  4. Parse
  5. Get static build of xbps package manager from special mirror. Use to bootstrap stratum.
  6. Not needed
  7. xbps-install -Syu
  8. Write locale to stratum's /etc/default/libc-locales and run xbps-reconfigure -f glibc-locales
  9. None needed.
I'm thinking of making void-musl a separate "distro" from void for the purposes of the UI here, unless someone has a better idea. It'll be almost identical under-the-hood, just it'll look at a slightly different mirror location.
One way to go about researching this is to look for instructions on setting up the distro in a chroot, or to bootstrap the distro. Many distros have documentation like this or this.
Don't feel obligated to actually fully script something up for these. Some of that effort may go to waste if someone comes up with another strategy, or if some code could be shared across multiple strata. Just enough for someone else to write up such a script should suffice for now. It would be good if you tried to follow the steps you're describing manually, though, just to make sure they do actually work and you're not missing something.
In addition to coming up with these items for distros I haven't covered and improving strategies for distros we already have, there's value in thinking of other things which could be useful that we might need per distro. Is there anything I'm forgetting which should be added to the per-distro list of information we need?
I know a lot of people have said they would be interested in contributing, but don't know enough low-level Linux nitty-gritty to code something up. This may be a good way to contribute that might be more accessible.
submitted by ParadigmComplex to bedrocklinux [link] [comments]

How to compile rebase aeond/aeon-wallet-cli for Android

Someone asked in a separate thread how I compile aeond for my Android Samsung Galaxy S6.
I replied there but thought it might be good to outline the process and provide some unofficial Android binaries if you want to test with care.
My build system is Ubuntu - if you are a Windows user you could spawn a free AWS t2.micro Ubuntu instance or a VirtualBox VM and complete these steps on there.
Note this is not to compile the GUI wallet. My use case is to have a node on a separate device on USB power that I can call remotely.
If you know what you are doing with Docker and adb, grab the Dockerfile: , edit it as per the "Edit The Dockerfile" section of this post, then follow the instructions here
If you want to skip building the binaries, grab this very unofficial zip file (25mb) that was created from writing this up: (md5sum: c2b93b71ed2bb8d02da91b296e5fb84c) and move on to the "Copy To Device And Run" section.
Considering it's a daemon with no GUI there's no great screenshots or videos of it in action. Best I could do:
Build Requirements
Install Docker
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce 
Add user to docker group
sudo groupadd docker
sudo usermod -aG docker $USER 
Log out and log back in (or restart)
Test docker works:
docker run hello-world 
If you get an error like this:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
The docker group is not working properly, you can google for this solution or drop to root with sudo su
Install Git and adb
sudo apt install -y git adb 
Download Rebase Code
We will pull directly from stoffu's repository:
git clone 
Then swap to the aeon-rebase-min branch
cd monero
git fetch
git checkout aeon-rebase-min 
Edit The Dockerfile
We need to edit the utils/build_scripts/android32.Dockerfile file to build the rebase code, not the original Monero code:
vi utils/build_scripts/android32.Dockerfile #or nano utils/build_scripts/android32.Dockerfile 
Find these lines at the bottom of the file:
RUN git clone \ && cd monero \ 
And change to:
RUN git clone \ && cd monero \ && git fetch \ && git checkout aeon-rebase-min \ 
Save the file.
Build the Binaries
This can take some time and bandwidth as docker is downloading a small version of Debian, the entire Android NDK, the rebase code and is then compiling the binaries.
cd utils/build_scripts/ && docker build -f android32.Dockerfile -t monero-android .
docker create -it --name monero-android monero-android bash
cd ../../
docker cp monero-android:/opt/android/monero/build/release/bin . 
Check the binaries have copied out of the docker container:
ls bin/ 
You should see
aeon-blockchain-export aeon-blockchain-import aeond aeon-wallet-cli aeon-wallet-rpc
If you have built the binaries on a remote server, download the bin folder to your local machine that has adb installed on.
Copy To Device And Run
Plug your Android device in and make sure developer mode is enabled -
With adb installed, confirm you can see your device (you may need to authorise your computer on your Android screen):
adb devices 
If the device shows and no errors are reported, copy the binaries to the sdcard:
adb push ./bin /sdcard/0/aeon/ 
Run aeond
There are different ways of running cross compiled apps directly. My phone does not let me write to the init.d files so I can't set aeond to start on startup.
adb shell
su
cd /sdcard/0/aeon/
./aeond --data-dir /sdcard/0/aeon/ 
You should now have aeond running successfully on an Android device!
You can open another adb shell, change directory back to the aeon binaries, and run the aeon-wallet-cli or aeon-wallet-rpc applications.
File Hashes
If you downloaded the zip at the top of the post and want to compare against your own compiled binaries:
58b529a1ec805d37be5bd3d7b62e695a aeon-blockchain-export
cf778e0b79637229639b4ca317d317d6 aeon-blockchain-import
81af340f2aaf3838bbd81e8ffaf8fb01 aeond
99965faca2e0de959b4f139d843fc290 aeon-wallet-cli
b2a9eb01424f83b921594f176cdc2068 aeon-wallet-rpc
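If you compiled your own binaries, one low-effort way to compare is to drop the hashes above into a checksum file and let md5sum check the whole set at once (run this from the directory containing your binaries; the filename aeon-md5sums.txt is just an example):

```shell
# Write the published hashes in md5sum's checksum-file format
# (hash, two spaces, filename).
cat > aeon-md5sums.txt <<'EOF'
58b529a1ec805d37be5bd3d7b62e695a  aeon-blockchain-export
cf778e0b79637229639b4ca317d317d6  aeon-blockchain-import
81af340f2aaf3838bbd81e8ffaf8fb01  aeond
99965faca2e0de959b4f139d843fc290  aeon-wallet-cli
b2a9eb01424f83b921594f176cdc2068  aeon-wallet-rpc
EOF

# -c prints OK/FAILED per file; a nonzero exit means at least one mismatch
md5sum -c aeon-md5sums.txt || echo "mismatch, or binaries not in this directory"
```
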
Is it on the nose to say tips are welcome? Wmt8WofgiL8gjawsk4fiMf9vKcXPhhJgHijQs3bWxjctH49VLVHjrA7iUyfYT3Q8wG7ifuaAUmftSLN2yJguQYNW1HGAbKUEf
submitted by dddanmar to Aeon [link] [comments]

Any open source software for campaign management?

I'm looking for suggestions for campaign management software -- both for the design phase (story/locations/npcs/objects, etc) and for keeping track of campaign progress. It's something I'd have on a laptop at the table.
The only real requirements are that it:
- Is entirely offline (does not require an internet connection)
- Runs under Linux (maintained for the latest Ubuntu LTS) -- bonus if it's already in the repos somewhere
Ideally, it is also free and open source. I would happily pay any reasonable one-time fee, but I worry about future support and maintenance on closed-source projects.
It should also be generic, or otherwise at least work with the GURPS system.
What I do not need are:
- Any map building functionality
- Any virtual tabletop functionality
- PC character building
Although if the tool includes these, that's fine.
I'm running the same campaign with a few small groups in parallel, so I am curious if anyone knows of decent management software to first build and then help track what exists in which world. I am currently using vim -- i.e., a few directories and some text files.
Any suggestions? Thanks in advance!
Edit 1: thanks to everyone for the suggestions so far. I'll come back and update this post with my experiences after I've had a chance to experiment with them.
Edit 2: I've decided to go with (the nodejs version of) TiddlyWiki or possibly CherryTree.
Edit 3: After a month of using these tools, I've ended up using CherryTree exclusively. It's perfect for my needs.
Here are my notes:
- TiddlyWiki
  - Offline? Check.
  - Flat files? Check.
    - It doesn't look like it stores everything in a single flat file.
    - Images are saved with their original names and each note has its own .tid file.
    - This is true at least for the nodejs version.
  - Requires no installation -- just do:
    git clone --depth=1 ''
    cd TiddlyWiki5
    nodejs tiddlywiki.js $HOME/Documents/mywiki --init server
    nodejs tiddlywiki.js $HOME/Documents/mywiki --server 9100
  - The wiki formatting is kind of strange. It seems like a hybrid of markdown and trac formatting.
    - So, yet another markup language to try and not get confused with all of the others.
    - Fortunately, this is mostly a non-issue for campaign management, and there are editor shortcuts.
  - It does support categorization using tags, although not true hierarchy.
    - Now that I think about it, tags might be more appropriate for parallel branching campaign management.
    - Start with everything under a "common" tag, then copy to a campaign tag when something changes.
- CherryTree
  - I put this into a similar category to Zim.
  - Installation: "apt install cherrytree"
  - CherryTree is a great little tool, too!
  - Supports both tags AND hierarchy!
  - Stores everything in a single file (sqlite or xml)
    - This includes binary blobs, too
- Zim
  - Zim is a great little tool!
  - Installation: "apt install zim"
  - It has hierarchy, runs locally, etc
  - Uses flat files
    - However, it does reference outside images instead of hardlinking or copying them.
  - Operates in WYSIWYG but has an easy option to edit source (markup)
  - I'm unsure if this beats CherryTree or vice-versa.
    - For a smaller project, I'd give CherryTree the edge, because one file ensures you've got every piece (image, etc).
    - But, for a big project, you don't want everything in one sqlite db.
- vimwiki
  - I love it! I'm going to be using this a lot in the future.
  - However, I'm not sure that it's the right tool for campaign management.
    - For example, this solution lacks inline image support.
    - A wiki with markdown-like formatting seems like a better solution here.
- Scrivener
  - Scrivener is what originally led me to create this post.
  - It seems like the right design in many ways, but the lack of official, native Linux support is a deal-breaker.
    - Even if it runs under wine today, that's only until something breaks and the company doesn't care to fix it.
    - Hold that thought. There's apparently a Linux beta available on the forums to look into.
  - Scrivener also has a bunch of syncing features I couldn't care less about.
    - Just give me some flat files I can rsync or push to a git repo.
- Laverna
  - I put this in a similar category to TiddlyWiki.
    - However it's missing some important features by comparison.
  - Requires no installation -- just "nodejs server.js" and you're already running at
  - Offline? Check.
    - There's also a sync "feature" which is thankfully optional.
  - Flat files? Check.
    - It seems to be essentially nothing more than some json and markdown files. Beautiful.
  - However, there are also some major problems:
    - You cannot put one note under another one.
    - It seems there is no standard multi-level collapsible hierarchy support for organizing the notes (?).
    - Notes themselves aren't listed alphabetically.
    - There's no search support (Ctrl-f) while in edit mode.
    - It isn't obvious where the data is actually stored, nor is that location configurable in the settings.
    - Images aren't stored -- they just point to the filepath of the original image added.
      - Ideally, this should use a hardlink, or copy as a last resort.
      - In any case, exporting the data should include the images.
      - This has been a recognized issue for over a year, but hasn't been fixed.
  - So, given the above, I can't recommend Laverna just yet. Oh well.
- Something running on a LAMP stack
  - Apache is okay, but requiring PHP -- let alone MySQL -- is far too much for something like this.
  - For a webapp, WSGI, nodejs, cherrypy (or even Django) would be more reasonable.
- MediaWiki
  - Entirely fine, if more compact alternatives didn't exist.
- Wordpress
  - Not really seeing Wordpress as the right UI for this.
- dokuwiki
  - It's PHP. That also means having to install PHP. And I'd need a webserver (apache).
  - However, it doesn't need an external database, which is nice.
  - If a webapp, then a simple tool like this should really be a lot more cohesive.
    - TiddlyWiki meets the needs of a wiki without having so many dependencies.
- pmwiki
  - I put this in the same category as dokuwiki. It's a self-contained PHP webapp.
  - As far as feature set, it seems comparable to dokuwiki as well.
  - In terms of syntax, DokuWiki feels closer to Trac and Markdown.
  - If I need a PHP wiki at some point, I'll compare/contrast dokuwiki and pmwiki in more detail.
- maptool
  - While not generally a fan of java, this does run out-of-the-box, is cross-platform, and seems fairly lightweight.
  - Ultimately I'm not sure how this supports campaign management (as it's primarily for mapping).
    - It might have additional features I don't know about.
- vim orgmode plugin
  - I didn't look into it because vimwiki seems to satisfy this space perfectly.
-
  - I looked at it briefly, but it does seem like overkill. I'd stick with text files before going with something this big.
- confluence
  - I put this in a similar category to . Team collaboration software is overkill for this task.
- voodoopad
  - As far as I can tell, this is only for MacOS, so I didn't look further.
submitted by RyanSanden to rpg [link] [comments]

Welcome Windows refugees, welcome to GNU/Linux (an update for the sticky)

Microsoft will terminate support for Windows 10 on October 14, 2025.
Microsoft will terminate support for Windows 8 on January 10, 2023.

Microsoft will terminate support for Windows 7 on January 14, 2020.

Microsoft terminated support for Windows VISTA on April 11, 2017.
Microsoft terminated support for Windows XP on April 8, 2014.
Microsoft terminated support for Windows ME on July 11, 2006.
Microsoft terminated support for Windows 98 on July 11, 2006.
Microsoft terminated support for Windows 95 on December 31, 2001.
Microsoft terminated support for Windows 3.1 on December 31, 2001.
Microsoft terminated support for Windows NT on July 27, 2000.
What to do: Your decision, but we recommend you change your operating system to Linux (GNU/Linux).
GNU? What is this GNU? - Linux is only the kernel, not the applications that run on it. The kernel and GNU together form the OS. GNU provides the compiler, libraries, binary utilities (many of the terminal commands), and shell (Bash). Some are used in Windows and Mac. A kernel is the lowest level of software that interfaces with the hardware in your computer. It's the bridge between GNU and the hardware.
Desktop environment?? A collection of GUI applications is referred to as a desktop environment, or DE. This includes things like a menu, icons, toolbars, wallpaper, widgets, and a window manager. Some DEs take more system resources to run than others. Most end users don't care too much about the DE, GNU, or kernel; they really only care about the applications like games, email, word processors, etcetera. So how to get started with the migration?
The Migration.
THE BACKUP. Even if you toast your machine, you will be able to recover your data. If your backup software has a "verify" feature, use it. You'll want to back up to an external device, if possible. Do NOT back up your data onto your existing C: drive; if you somehow delete your C: drive during the installation of Linux, your backup will be deleted too. Move things to an external drive/USB stick or a cloud account (note: the Downloads, Music, My Pictures, and My Videos subdirectories may be VERY large). What to back up? Well, you aren't going to be able to run Windows programs on Linux (well, you can, but that's another story; see WINE), so there is no need to back them up, but you will want things like documents, pictures, movies, music, and things of that nature. Unfortunately, some of these can be hard to find in Windows -- things like emails and browser profiles/bookmarks.
  • Things on the Desktop are actually located at C:\Documents and Settings\USERNAME\Desktop or %USERPROFILE%\Desktop
  • Favorites (Internet Explorer) C:\Documents and Settings\USERNAME\Favorites or %USERPROFILE%\Favorites
  • The My Documents folder is C:\Documents and Settings\USERNAME\My Documents or %USERPROFILE%\Documents
  • Email. Microsoft likes to move these around from version to version.
  • Contacts (Outlook Express) C:\Documents and Settings\USERNAME\Application Data\Microsoft\Address Book
  • Contacts (Outlook) - The address book is contained in a PST file. In Outlook 2010: click the File tab > Account Settings > Account Settings > Data tab > click an entry > click Open folder location; usually C:\users\username\AppData\Local\Microsoft\Outlook or %USERPROFILE%\AppData\Local
  • 2013/16 C:\users\username\Documents\Outlook Files
  • email (Outlook Express) C:\Documents and Settings\USERNAME\Local Settings\Application Data\Identities\XXXXX\Microsoft\Outlook Express (where XXXXX is a long string of alphanumeric characters)
  • email (Outlook 2003) C:\Documents and Settings\USERNAME\Application Data\Microsoft\Outlook
  • Getting things out of a PST file is another matter altogether; a utility like readpst will be needed. For contacts/vcards, importing them one by one is simple enough, but for a bulk import you will need to open a terminal and type some commands.
    • $ cat ./* >> mycontacts.vcf
    • $ sed -i 's/VCARDBEGIN/VCARD\n\nBEGIN/g' mycontacts.vcf
    • Then import the mycontacts.vcf into the particular program you are using. Thunderbird or Claws or something else.
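The two commands above can be rehearsed safely first; here is a self-contained sketch using two tiny sample vCards in a scratch directory (filenames are hypothetical):

```shell
# Demo in a scratch directory with two tiny sample vCards
# (in real use, cd into the folder holding your exported .vcf files).
cd "$(mktemp -d)"
printf 'BEGIN:VCARD\nVERSION:3.0\nFN:Alice\nEND:VCARD' > alice.vcf
printf 'BEGIN:VCARD\nVERSION:3.0\nFN:Bob\nEND:VCARD\n' > bob.vcf

cat ./*.vcf > mycontacts.vcf    # merge every card into one file
# Exports without trailing newlines leave END:VCARD glued to the next
# BEGIN:VCARD; insert a blank line between records so importers can split them.
sed -i 's/VCARDBEGIN/VCARD\n\nBEGIN/g' mycontacts.vcf

grep -c '^BEGIN:VCARD' mycontacts.vcf    # -> 2
```

The glob expands before the shell creates mycontacts.vcf, so the output file never includes itself.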
This is a short list for a few programs. Make a list of the programs you use and the file types they produce, and confirm where those files live. Keep in mind that some Microsoft formats are proprietary and may not transfer cleanly to another program. Sometimes the content comes across but the markup is proprietary, so the text of a Word document may survive while the spacing or special columns do not, or a particular font may be missing and a substitute gets used.
Each user on a Windows XP machine has a separate profile, all stored in the C:\Documents and Settings directory. Be sure to copy the data for each profile you want to recreate on the Linux system. Some directories (e.g. Application Data) may be hidden; to browse to them, first enable "show hidden files and folders".
Migration tips: When you're installing, try and have access to a second computer with a working internet connection. If you run into problems during the install, you can use the other computer to search for a solution.
If you encounter problems, don't forget to try any "test installation media", "test memory" and/or "test hard disk" options you may be offered on the install disc.
Using the same wallpaper on your new Linux installation might help make the transition easier psychologically.
Select the distribution CPU type: when downloading Linux, be sure to select the correct build for your CPU. Many distributions have separate downloads for 32-bit and 64-bit CPU architectures, and may also have downloads for non-x86 CPUs. If you're migrating from Windows, you'll likely want x86, 32-bit or 64-bit.
Have a look at the various Linux distributions available (there are quite a few to choose from) and make a shortlist of candidates. Many of them have a "Live CD", a version that runs from CD/USB stick, which can be downloaded and burned. You boot off the live CD/USB and see whether the software works for you and your hardware, without making any changes to your existing Windows install.
Some distributions may pull from stable repositories or testing; more on this below (see Repositories). Some distros require reinstalling the OS to upgrade to the next version, while others are rolling release. This may affect how you choose to set up home (see "Choose the location for home" below).
You can find a list of distributions in many places, including these:
The /g/ OS guide (updated to v.1.3.2)
Comparison of Linux distributions
For recommendations try the articles linked below, or just browse the sidebar. Several distributions have been specifically designed to provide a Windows-like experience, a list of these is below. You could also try the Linux Distribution Chooser (2011).
Why so many distros? Don't think of a distro as a different Linux, but as the same Linux packaged with a unique collection of software, including the DE. One DE might be GNOME, which is similar to a Mac or Amiga in style; another might be KDE, which is similar to Windows; or Unity, which is more like a tablet. They all use GNU and the Linux kernel, and they all draw on largely the same pool of open source software.
Linux comes in a lot of flavours; some are set up to be as tiny as possible, and some even run entirely from RAM. Puppy Linux is one such OS. Puppy now comes in a variety of flavours and is well suited to machines that originally shipped with Windows 95. Precise Puppy is the more traditional flavour and is a mere 201 MB in size. It uses very tiny programs you have never heard of and takes getting used to, but it's fully usable if you take the time to learn them. For instance, it uses SeaMonkey, which is a browser, email client, HTML composer, and newsgroups client all in one program (like Netscape used to be). That's part of how it stays so small, and because the entire thing runs in RAM it is lightning fast. There are heavier versions for Windows 98 and ME machines, like Lucid Puppy. The Puppy website is a horror story, but you can always go straight to the forums.
Download the ISO and Burn it
If you don't have the ability to burn a distro ISO to disc, or have really slow internet, you can have one sent to you by snail mail, or even pick one up at a local computer shop. Otherwise, download the ISO image (some are as small as 100 MB, some over a gigabyte). You will need a CD or DVD burner in your machine and software to drive it, or you can put the ISO onto a USB device.
There are many guides out there for this.
Verify the hash of the ISO
This verifies that the download is intact.
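Most distros publish a checksum file next to the ISO; checking it looks like this (filenames are hypothetical, and a scratch file stands in for the ISO here so the commands can be tried safely):

```shell
# Demo with a scratch file standing in for the ISO. In real use you download
# the .sha256 / CHECKSUMS file from the distro's site instead of creating it.
cd "$(mktemp -d)"
echo 'pretend this is an ISO' > distro.iso

sha256sum distro.iso > distro.iso.sha256   # normally provided by the distro
sha256sum -c distro.iso.sha256             # prints "distro.iso: OK" if intact
```

If the file was corrupted in transit, `sha256sum -c` reports FAILED and exits non-zero, which is your cue to re-download.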
Do a test boot with a LiveCD
It's pretty simple: insert the distro medium (CD/USB) and use your BIOS/UEFI boot selector to boot from it. Most distros include a tool to test your RAM, as well as the option to boot into a live version of the distro that you can poke around in and try out.
Install the new OS
This is where things get complicated. There are several things to consider first: dual boot, and the location of home. Read the sections below; installing is covered in more detail later.
Choose Dual Boot or Linux Only
Dual boot (sometimes called multi-boot) is a good way to experiment. If you want to keep your Windows install, you can: with dual boot you select which OS to use from a menu when you first power on the machine. The topic is a bit too complex for this post, so we recommend making a post if you have questions (search the linux4noobs sub for "dual boot"); there are also videos on YouTube on how to dual boot. You will need sufficient disk space to hold both operating systems at once, though Linux is small compared to Windows, and each distro's page states its required space. If you keep an old, no-longer-supported version of Windows, do NOT go on the internet with it, as it is no longer secure! Don't use it for the web, email, chat, et cetera; use Linux for going online.
All of this assumes you will allow Linux to replace the Windows Master Boot Record with GRUB 2 (the Linux boot menu), but there is an alternate method of dual booting that keeps the Windows boot menu and uses EasyBCD to add a Linux option. Keeping the Windows loader is a far more complex way to go.
Choose the location for home
First, what is /home? Home is where you store your pictures, documents, movies, et cetera. There are three options for home: inside the Linux install partition, on its own partition, or on its own drive.
The drawback of separating home from the Linux install partition is that it is a little more complex to set up. The benefit is that the Linux OS partition can be wiped out while your files in home (a separate partition/drive) stay safe. With home on its own drive, the entire drive the OS is installed on could die and your files would still be safe on the other drive; you just install a new drive, install an OS, and you are back up and running (see Partitioning further below). The drawback of home on its own drive is that that drive can itself die and take your home files with it. Of course, home should always be backed up to the cloud or another drive anyway, so recovering from that kind of failure should be easy.
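If home does get its own partition, the installer ultimately records that in /etc/fstab; a hypothetical sketch of the two entries (the UUIDs are placeholders, the real ones come from blkid):

```
# /etc/fstab — illustrative entries only; real UUIDs come from `blkid`
UUID=1111-2222-3333   /       ext4   defaults   0  1
UUID=4444-5555-6666   /home   ext4   defaults   0  2
```

With home on its own line like this, reinstalling the OS just means reformatting / and pointing the new install's fstab at the same /home partition.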
Choose your Apps: selecting and installing software
Linux does not natively support Windows programs, so you'll need to find a "workalike" for each Windows application you use. Some distros come with a collection of some of these on the install but they can all be installed later from the repositories or from their websites. More on what a repository is further down below.
Here are some websites that list equivalents.
The primary APPS people will be concerned about are
Windows APPS you can't just do without
You can also try Wine, which lets some Windows applications run on Unix-like systems, including Linux. It may not work for your particular needs; you'll need to test it. There is a compatibility list here. It's also possible to "virtualize" your Windows install using software such as VirtualBox and run it in a window under Linux.
Running OLD DOS Apps/games
If you have DOS apps, try DOSBox or DOSEMU. Many other emulators run on Linux, from old arcade MAME games to the Sony PlayStation.
Above we mention repositories. What are they? With Windows, you search for software on the web, download a file, and install it. In Linux, software lives in central collections called repositories, and there are many of them. Major repositories are designed to be malware free. Some carry stable, older, stodgy software that won't crash your system; some are testing and might break things; others are bleeding edge, aka "unstable", and likely to break things. By "break" we mean things like dependencies. Say a program called Wallpaper needs a small program called SillyScreenColours (SSC) v1. SSC might be up to v3 already, but v3 won't work for Wallpaper, which needs v1. In a testing repo a newer program, say ExtremeWallpaper, might need v3 of SSC; install it and it removes v1 to install v3, and now Wallpaper doesn't work. That's what we mean by break. To keep that from happening, Linux pulls from repositories that are labelled/staged for stability. When you want more software, you open your distro's "software manager", an application that connects to the repository, lets you select and install software, and warns you of any possible problems. You can still get software from websites with Linux, but installing may involve copying and pasting commands, or "compiling from source" to make sure all of the program's dependencies are met. You can sometimes break things that way, however, or what you are trying to install won't run on your distro's kernel or unique collection of software.
Software manager.
Each distro has chosen its repositories and can offer different tools to install from them. Debian-based systems use APT, where others like Fedora use RPM (with YUM on Red Hat) or Pacman on Arch. These are text-based commands that can be run from a terminal. Most desktop distros also have GUI software managers, like Synaptic, or their own custom GUI; Mint's is called Mintinstall. Each distro has its own names for its repositories. Ubuntu has four, Main, Universe, Restricted, and Multiverse, as well as PPAs (Personal Package Archives). Packages in PPAs do not undergo the same process of validation as packages in the main repositories.
  • Main - Canonical-supported free and open-source software. (??stable, testing, unstable??)
  • Universe - Community-maintained free and open-source software. (??stable, testing, unstable??)
  • Restricted - Proprietary drivers for devices.
  • Multiverse - Software restricted by copyright or legal issues.
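To make the APT/RPM/YUM/Pacman split above concrete, here is a tiny illustrative shell function (purely hypothetical, not shipped by any distro) that prints the native install command for a package per distro family:

```shell
# Illustrative only: map a distro family to its native install command.
# The package-manager invocations are the standard ones; the function
# itself is just a cheat sheet, not a real tool.
install_cmd() {
  case "$1" in
    debian|ubuntu|mint) echo "sudo apt install $2" ;;
    fedora)             echo "sudo dnf install $2" ;;
    redhat)             echo "sudo yum install $2" ;;
    arch)               echo "sudo pacman -S $2" ;;
    *)                  echo "unknown distro family" >&2; return 1 ;;
  esac
}

install_cmd mint scribus    # -> sudo apt install scribus
install_cmd arch scribus    # -> sudo pacman -S scribus
```

Whatever the spelling, the idea is the same everywhere: the package manager fetches the program and its dependencies from the distro's repositories in one step.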
You can change a Debian system from stable to testing only, or even run a mixed system pulling from both stable and testing, but this is more complex. Each distro has a way to add repositories (or PPAs if Ubuntu-based) or change sources. On Debian-based Mint, to install software you launch the software manager, input your password, then either do a word search (like "desktop publishing" or "drawing") and browse the matches, or navigate categories like Games, Office, and Internet. Graphics, for instance, breaks down into 3D, Drawing, Photography, Publishing, Scanning, and Viewers. When you find software you want, click on it to read its details. For Scribus, a desktop page layout program, you get: "Scribus is an open source desktop page layout program with the aim of producing commercial grade output in PDF and Postscript. Scribus supports professional DTP features, such as CMYK color", and you can simply click the "Install" button. Removing software works the same way, and there is a toggle in the View menu for "Installed" and "Available". The same software can be installed or removed via Synaptic, which is a little less graphical and more text-based but still GUI point and click. The process is similar in other distributions.
Actually installing
There are two more hurdles to running Linux.
UEFI & Secure boot: newer machines have a feature which can prevent non-Windows operating systems from booting. You may need to disable Secure Boot in your BIOS / UEFI if your hardware has this feature.
Drivers: this can get tricky, especially for newer consumer-grade hardware. If you hit a problem here, please make a post about it so we can assist. Booting a live CD can surface problems before you spend time on a full install. Some hardware is so new or rare that there just aren't open drivers available for it, and you may have to use a non-open proprietary driver or change some hardware; this mostly affects wifi cards and graphics cards. A lot of older hardware that won't run on Windows 7 and up runs fine on Linux because the drivers are available and supported. There is a graphical program for adding and removing drivers, but it's best to look up the text commands before changing a graphics card driver: if the driver you try fails, you may lose graphics and be reduced to a command line to revert the change.
This is where things can get SCARY. Not really, but it can be challenging for some. What is a partition? Simply a division of your hard drive; think of Stark in Farscape: "Your side, my side; your side, my side." Basically, you are labeling a chunk of hard drive space for a specific purpose: a section to hold boot info, a section for swapping memory to disk, a section for Windows, a section for Linux, and a section for holding documents, pictures, et cetera, called home in Linux. Home is where your user account folder will be created. You can do this partitioning in Windows with its own partitioning tool if you prefer; that is best for shrinking the Windows partition, because Windows can have a RAID set up or span multiple hard drives, and sometimes Windows needs to be shut down while holding the Shift key to make it completely release its lock on the hard drive. Or you can use a tool on the live distro called GParted. GParted takes a little getting used to visually, but does the same thing the Windows tool does. The one thing it can't do is force Windows to let go of the hard drive and keep the partition intact; it can forcibly wipe the partition, however. You can use GParted to label partitions as /home, where your documents go (if not specifically designated, home lives inside the Linux OS space); / for the Linux OS; /boot, where GRUB 2 goes; or swap. There are multiple file system types available: FAT32, NTFS, ext2/3/4, and more. There are dozens of videos on YouTube on how to use it.
Why use GParted? Doesn't the installer re-partition? Yes, but it may not have the options you want. There is usually a manual option, which may be GParted itself, a cut-down GUI of it, or some other partition software entirely; it varies from distro to distro. Some will let you share space with Windows via a slider but give no option to make home a separate partition or put it on a separate drive; others only offer "take over the whole disk" or "manual". If there is a hard drive in the machine you absolutely don't want touched, shut down and unplug its power. If a partition has menu items greyed out, it is mounted and must be unmounted before operations can be performed on it; often swap will have to be unmounted. Windows labels hard drives IDE0, IDE1 or HD0, HD1, etc.; in Linux the nomenclature is sda, sdb, with partitions numbered sda1, sda2, sdb1, sdb2, and so on. So decide how you want to partition, then decide whether to use the Windows tool, the live CD's automatic tool, or the manual tool (or GParted). And yes, while the install is running you can use the live CD's software to browse the internet.
Also be aware of FAKE RAID.
Using a printer attached to and shared from a Windows machine on a home network is fairly straightforward from a Linux machine, but if your entire network is now all Linux machines, you need to share the printer yourself by opening the print server's web interface in a browser and clicking on the Printers tab. On most Linux distros this is already all set up.
This is a huge topic and really needs to be narrowed down to what you are troubleshooting.
Recommended reading:
Contributors to this doc: u/Pi31415926, PaperPlaneFlyer123, provocatio, spammeaccount
submitted by spammeaccount to linux4noobs

Guide for installing Slackware64-current linux on a Dell G7 Optimus with GTX 1060 and nvidia drivers


Slackware is the oldest surviving Linux distro, created by Patrick Volkerding in 1993. In stock form it looks old. Thanks to the awesome Slackware maintainers alienBob, rworkman, and ryanpcmcquen (apologies if I missed anyone), who contribute personal repositories that, combined with sbopkg and slackpkg+ (package managers), let you create a powerful and stable Linux system that does not look or feel old at all. At first Slackware will not boot to a GUI; it stays in text mode until prompted, so remember that after rebooting throughout this tutorial I may not remind you to startx. Adding a display manager will solve this if it's an issue for you, and we can do that once we get nvidia up and running.

Slackware is very stable and more old-school in build philosophy than most. Slackware does not modify the packages in its repos, so I can install the driver from the nvidia site no problem. You can't do that on most other distros, because they patch the packages in their repos.

The hardest thing about Slackware is that the package managers do not resolve dependencies. This can be good and bad. Good for stability: when using the sbopkg package manager we read the README files, note the dependencies, and install those packages first. This is the beginning of another advantage: with sbopkg we build all packages from source, and packages built on your own system usually run better than stock binaries. The bad is that it is not as easy or as fast to install a big package like, say, the Kodi media center. In Ubuntu that's done in a few minutes; on Slackware it may take the better part of an evening, but it will all build from source and it is a nice accomplishment. All in all I think it's a good thing, as it makes one take their time with system changes and package installs.

Sbopkg is not needed for many popular programs. The awesome Slackware maintainers all have side-project repositories that we are welcome to use. Many of the packages there are complete and do not need extra dependencies (handbrake, for example), but I encourage you to read the README on each and every package you consider installing. If something is not in any repo, look for a SlackBuild script; these are the right way to build software on Slackware.

A decade ago I knew gamers running Slackware for the best FPS rates, as it runs with fewer services than most. The configuration tools are mostly ncurses interfaces, but this is by design; it makes remote administration over ssh quite easy. I encourage you to read up on things like changing window managers via xwmconfig, setting up wireless, and so forth before taking the plunge. This is not an easy operating system to master, but you will know more about Linux by the time this is done. I had a phone interview yesterday and it helped to say that I was about to install an encrypted-LVM Slackware on a new Optimus laptop; it prompted many questions. It is cool to run the oldest Linux distribution out there on some of the newest hardware. AND there is no systemd in Slackware, unless you count a spinoff that ships with GNOME (Dlackware or similar); stock Slackware does not include systemd.

With slackware64-current you get a rolling release where we can build many of our packages from source. There will be no indication in the KDE dock that an update is available; manually running slackpkg update, and/or reading the Slackware changelog from time to time, keeps you current. One less running process is another way to look at it. When I ran Ubuntu on this laptop I had over 2400 packages installed by this point; finishing this tutorial, I notice I have only 1666 packages installed.

Here is the screenfetch output from just after I got the beta nvidia driver working (covered below); the ASCII logo didn't paste well, so here are just the fields:
OS: Slackware
Kernel: x86_64 Linux 4.19.5
Uptime: 18m
Packages: 1666
Shell: bash 4.4.23
Resolution: 1920x1080
DE: KDE 5.51.0 / Plasma 5.14.1
WM: KWin
WM Theme: Oxygen
GTK Theme: Breeze [GTK2/3]
Icon Theme: breeze
Font: Noto Sans Regular
CPU: Intel Core i7-8750H @ 12x 4.1GHz [81.0°C]
GPU: GeForce GTX 1060 with Max-Q Design
RAM: 849MiB / 15734MiB

Notice that I am on the beta nvidia driver (written 12/1/2018) and the latest KDE Plasma. Until I update this post with a way to switch back to the intel driver, know that the system defaults to the nvidia GPU at this point; I hope to solve that by looking into prime-select.

Here are some repos to get you looking at the community and what is available.

alienBob repo:
ryanpcmcquen: you may have to google for more of his projects

If you want to test Slackware before committing to an install, feel free to try liveslak, a side project of alienBob that puts the newest slackware-current on a USB stick for portability. I will advise that it's tricky to get going on the Dell G7 with Optimus: you need to escape at the grub menu and add a lot of things to the kernel boot line. I tried but did not guess the right blacklist and modeset options. Try a different computer if possible; the Dell G7 is hard with any distro that defaults to nouveau.

If that went well or you are already convinced read on for instructions on making this permanent.

First, get a slackware-current ISO from here:

That's 2.8 GB worth. If you want a smaller netinstall-based ISO:

Second: This link will help to get it onto usb if on windows:

if on linux simply:

dd bs=4M if=slackware64-current-install-dvd.iso of=/dev/sdx x being the usb stick, use lsblk to find
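The dd line above is destructive if of= points at the wrong device, so triple-check with lsblk first. As a safe illustration, the same invocation can be rehearsed against scratch files and the result verified with cmp (filenames here are hypothetical):

```shell
# Rehearse the dd write against scratch files instead of a real USB stick.
cd "$(mktemp -d)"
dd if=/dev/urandom of=fake.iso bs=1M count=2 2>/dev/null   # stand-in for the ISO
dd if=fake.iso of=fake-stick bs=4M 2>/dev/null             # the "write" step
cmp fake.iso fake-stick && echo "copy verified"            # byte-for-byte identical?
```

The same cmp trick works on the real stick afterwards (compare the ISO against the first N bytes of the device) if you want to confirm the write.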

Bios: only need to disable secure boot and set mode to audit. apply and exit

Pressing F12 on the dell logo ...

Choose the efi general partition 1 once the boot menu shows.

Partition Disks:


I opted for an encrypted LVM using aes-xts-plain64 encryption, and I filled the block devices that the encrypted containers mount to with one part /dev/urandom and one part /dev/zero, as advised by an NSA poster here:

The normal Slackware install docs show a different encryption method that is not as secure, and for some reason when I tried to fill the block devices with urandom, the next step with cryptsetup always failed with error code 22 after 8+ hours. So I propose we use the NSA way: fill the drives with zeros through the encrypted mapping, then overwrite the header area with a urandom fill, and consider the drive fairly safe should the FBI want to mount it and run a statistical block comparison. Without the random/zero fill it would be easier to see where the encrypted container was mounted and then try to decrypt it. This way more of the drive is filled than not, and it is harder for forensics to determine the mount points.

I also propose putting swap on the slow drive, along with a small backup partition and a large data partition. On the SSD we need to leave two small partitions unencrypted for EFI and /boot, since the kernel and the initrd must sit in unencrypted space. So let's get going:

With the Slackware USB inserted and the above BIOS changes made, power up and press F12 upon seeing the Dell logo. Go through the initial questions and don't worry about wireless. Log in as "root" with no password.

Lets set up the partitions:

gdisk /dev/sda
o for new partition table
Y for confirm
n for new partition
enter for the default first sector
enter for the default last sector (rest of drive)
8e00 for LVM partition type
w for write
Y for agree
done with /dev/sda for now

gdisk /dev/sdb
o for new partition table
Y for confirm
n for new partition
enter for default first sector
+512M for 512mb
ef00 for EFI partition type
n for new partition
enter for default first sector
+512M for 512mb
enter for default partition type
n for new partition
enter for default first sector
enter for default last sector (rest of drive)
8e00 for LVM partition type
w for write
Y for confirm
done with /dev/sdb for now

Now we have an LVM partition on each drive, with EFI and /boot outside the encryption via the two 512 MB partitions.

Urandom takes forever, so I am going to guide you through a way to make the container, fill it with zeros, remove the mapped link, use urandom to fill the header area, and then recreate the container and mapping: an effective way to save time while still effectively filling the drive with random-looking data. This takes about 3 hours, but compared with /dev/urandom doing the same job we save another three. It might look odd at first, but look it over and read the link above where I got the idea if needed. It's a time saver.

cryptsetup luksFormat -c aes-xts-plain64:sha512 -h sha512 -s 256 /dev/sda1
set password twice
cryptsetup luksOpen /dev/sda1 Vault1
offer password
dd if=/dev/zero of=/dev/mapper/Vault1 bs=1M (took 10825 seconds, or 3 hours)
cryptsetup luksFormat -c aes-xts-plain64:sha512 -h sha512 -s 256 /dev/sdb3
cryptsetup luksOpen /dev/sdb3 Vault2
offer password
dd if=/dev/zero of=/dev/mapper/Vault2 bs=1M
dmsetup remove /dev/mapper/Vault1
dmsetup remove /dev/mapper/Vault2
dd if=/dev/urandom of=/dev/sda1 bs=512 count=2056
dd if=/dev/urandom of=/dev/sdb3 bs=512 count=2056
cryptsetup luksFormat -c aes-xts-plain64:sha512 -h sha512 -s 256 /dev/sda1
password twice
cryptsetup luksFormat -c aes-xts-plain64:sha512 -h sha512 -s 256 /dev/sdb3
password twice
cryptsetup luksOpen /dev/sda1 Vault1
pvcreate /dev/mapper/Vault1
vgcreate 1TB /dev/mapper/Vault1
lvcreate -C y -L 16.01G -n swap 1TB
lvcreate -C y -L 64G -n backups 1TB
lvcreate -C y -l 100%FREE -n data 1TB
cryptsetup luksOpen /dev/sdb3 Vault2
pvcreate /dev/mapper/Vault2
vgcreate 128GB /dev/mapper/Vault2
lvcreate -C y -L 64G -n root 128GB
lvcreate -C y -l 100%FREE -n home 128GB
vgscan --mknodes
vgchange -ay
mkswap /dev/1TB/swap

Enable gpm as you go through the options; a keymap is not needed.
Install Slackware in full. Set up networking; skip USB, skip lilo, skip elilo, skip the bootloader.
Complete the install, exit the installer, and select the option to go to a shell.

chroot /mnt
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB --recheck
grub-mkconfig -o /boot/grub/grub.cfg
***keep that last line handy, as we need to run it if we ever update the kernel or nvidia or both
nano /boot/grub/grub.cfg
***add line below kernel line:
initrd /initrd.gz
CTL+o to save
CTL+x to exit
/usr/share/mkinitrd/mkinitrd_command_generator.sh -r

should look something like:

mkinitrd -c -k 4.19.5 -f ext4 -r /dev/128GB/root -m xhci-pci:ohci-pci:ehci-pci:xhci-hcd:uhci-hcd:ehci-hcd:hid:usbhid:i2c-hid:hid_generic:hid-asus:hid-cherry:hid-logitech:hid-logitech-dj:hid-logitech-hidpp:hid-lenovo:hid-microsoft:hid_multitouch:jbd2:mbcache:crc32c-intel:ext4 -C /dev/sdb3 -L -u -o /boot/initrd.gz

and its fine if you are not using a logitech mouse like I am. Yours may look different.

Now copy the long two-liner that just resulted, or just highlight and right-click; if you enabled gpm it will paste at the cursor. If not, you need to type it out. If you want suspend to work, then add the following to that long two-liner:

-h /dev/1TB/swap


mkinitrd -c -k 4.19.5 -f ext4 -r /dev/128GB/root -m xhci-pci:ohci-pci:ehci-pci:xhci-hcd:uhci-hcd:ehci-hcd:hid:usbhid:i2c-hid:hid_generic:hid-asus:hid-cherry:hid-logitech:hid-logitech-dj:hid-logitech-hidpp:hid-lenovo:hid-microsoft:hid_multitouch:jbd2:mbcache:crc32c-intel:ext4 -C /dev/sdb3 -L -u -o /boot/initrd.gz -h /dev/1TB/swap

and append "vt.default_utf8=0 resume=/dev/1TB/swap" to grub kernel line
nano /boot/grub/grub.cfg
navigate to the line starting with kernel and add:

vt.default_utf8=0 resume=/dev/1TB/swap
CTL+o to save
CTL+x to exit
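For orientation, after both edits the menu entry's stanza in grub.cfg might look roughly like this (the menuentry title and kernel filename are illustrative; the root and resume paths are the ones used above):

```
menuentry 'Slackware' {
        linux /vmlinuz-generic root=/dev/128GB/root ro vt.default_utf8=0 resume=/dev/1TB/swap
        initrd /initrd.gz
}
```

Note that grub-mkconfig regenerates this file, so these hand edits need to be reapplied after rerunning it.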


Notice that partway into init you are asked for a password to unlock your encrypted volume.

Congratulations. You just installed slackware with encrypted lvm

Now that we have a base install done lets set it up:

open terminal
Press Enter to accept the suggested default for all selections; at the part asking if you want to write to sysconfig:
nano /etc/rc.d/rc.local
modprobe coretemp
/usr/bin/sensors -s

CTL+o to save
CTL+x to exit


Utilizing alienBob's excellent multilib repo for making i386 binaries work on 64-bit systems:

in a nutshell:
lftp -c 'open ; mirror -c -e current' ***this can take a bit.
cd current
su and enter password to get to root
upgradepkg --reinstall --install-new *.t?z
upgradepkg --install-new slackware64-compat32/*-compat32/*.t?z

Congratulations you now have a multilib system

You may have noticed that KDE looks to have some dust on it. If you would like a newer KDE, and KDE Connect to link your Android, do the following:

press CTL+ALT+BACKSPACE to kill the xserver and get to shell
if anything hangs press enter to kill that last dbus thing

su to get to root
removepkg kde
rsync -Hav --exclude=x86 rsync:// latest/
cd latest
upgradepkg --reinstall --install-new x86_64/deps/*.t?z
upgradepkg --reinstall --install-new x86_64/deps/telepathy/*.t?z
upgradepkg --reinstall --install-new x86_64/kde/*/*.t?z

Check if any ".new" configuration files have been left behind by
the upgradepkg commands. Compare them to their originals and decide
if you need to use them.
find /etc/ -name "*.new"
A graphical (ncurses) tool for processing these "*.new" files is slackpkg:
slackpkg new-config
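As an alternative sketch, a small shell loop (hypothetical, not a slackpkg feature) can show the diff for every leftover .new file under a directory:

```shell
# Illustrative helper: print a unified diff for each leftover *.new config
# file under the given directory (defaults to /etc). Run as root for /etc.
review_new() {
  root="${1:-/etc}"
  find "$root" -name '*.new' | while read -r f; do
    echo "== ${f%.new} vs $f =="
    diff -u "${f%.new}" "$f" || true   # diff exits 1 on differences; keep going
  done
}

review_new /etc
```

If a diff looks right, overwrite the original with the .new file; if you have local changes, merge them by hand instead.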

Does it look horribly wrong? No worries.
export TERM=xterm
choose kdeplasma or similar plasma5

Congratulations you have now installed the newest KDE-Plasma and we should be looking a little newer with the kde interface.

This is the time to setup our package managers.
nano /etc/slackpkg/mirrors
scroll down to "current" and uncomment a line close to your location by erasing the # at the front
CTL+o to save
CTL+x to exit
slackpkg update
slackpkg upgrade-all
slackpkg install xf86-video-nouveau-blacklist-noarch-1 ***This should kill the troublesome nouveau driver on reboot. Your system will default to the intel GPU at this point.

installpkg sbopkg-0.38.1-noarch-1_wsr.tgz
agree to the "current" repo
and exit
This fetches all the package lists so that sbopkg is ready to go.

Nvidia drivers:

Go to Nvidia's driver download page, enter your card's information, and download the newest beta driver.
cd Downloads
***Preparation before driver install
su and enter password to get to root
nano /etc/X11/xorg.conf
***paste in the following:

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
    Option "RegistryDwords" "EnableBrightnessControl=1"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

CTL+o to save
CTL+x to exit
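The BusID lines have to match your actual hardware. You can find the card's slot with lspci, but note that lspci prints the slot in hexadecimal ("01:00.0") while xorg.conf expects decimal "PCI:bus:device:function". A small converter, as a sketch (the function name is mine):

```shell
# pci_to_busid SLOT - convert an lspci slot like "01:00.0" into the form
# xorg.conf expects ("PCI:1:0:0"). lspci prints hexadecimal values; the
# Xorg BusID is decimal. Hypothetical helper name.
pci_to_busid() {
    slot=$1
    bus=$((0x${slot%%:*}))
    rest=${slot#*:}
    dev=$((0x${rest%%.*}))
    fn=$((0x${rest#*.}))
    echo "PCI:$bus:$dev:$fn"
}

# Find the card, then convert its slot:
#   lspci | grep -iE 'vga|3d'
#   pci_to_busid 01:00.0
```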

nano /etc/X11/xorg.conf.d/10-nvidia.conf
*** paste in the following:

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "Nvidia Corporation"
    BoardName "GTX1060"
    BusID "PCI:1:0:0"
    Option "AllowEmptyInitialConfiguration"
EndSection

CTL+o to save
CTL+x to exit
CTL+ALT+BACKSPACE to kill the xserver
sh ./
select yes for enable 32 bit binaries
no for xconfig
no for nvidia-xsettings

When the installer has completed, run:

grub-mkconfig -o /boot/grub/grub.cfg
/usr/share/mkinitrd/ -r
Copy and paste the command it prints to build a new initrd.gz.

You should now have a working external GPU or hybrid GPU with more power. Run glxgears to confirm.


Open a terminal; this is what I was able to see:
bash-4.4$ glxgears
Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
126652 frames in 5.0 seconds = 25330.273 FPS
127827 frames in 5.0 seconds = 25565.312 FPS
127070 frames in 5.0 seconds = 25413.889 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
after 53 requests (53 known processed) with 0 events remaining.
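With vsync off, glxgears is not a real benchmark, but it does confirm the Nvidia driver is active. If you just want the FPS figures out of that output, a small awk filter works; a sketch, assuming the standard glxgears line format:

```shell
# glxgears_fps - read glxgears output on stdin and print only the FPS
# numbers. Sample input line:
#   "126652 frames in 5.0 seconds = 25330.273 FPS"
glxgears_fps() {
    awk '/FPS$/ { print $(NF-1) }'
}

# Usage: glxgears | glxgears_fps
```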

If you are seeing similar, congratulations, you just set up Nvidia's beta driver on Slackware!

I'll update this post once I figure out PRIME switching or similar. The display manager sddm is experiencing some problems with the latest Nvidia driver; I'm looking for a workaround.

submitted by dkuchay to delllinux
