Benchmarking My FX-8320 with Core3 and TrinityCore in VirtualBox

One of my biggest disappointments when it comes to computers was buying an early version of the Intel Core 2 Quad Q8200, because Intel disabled hardware virtualization support (VT-x) on it as part of their arbitrary and consumer-unfriendly pricing scheme. That was back in 2008 and at $185, it was the best new CPU I could afford at the time. Certainly, it was better than the Pentium Dual-Core I was using!

When it came time to upgrade, I spent a long time researching all of my options for new hardware, so that I could not only get the best performance for my dollar, but also have access to all the features and functionality of a desktop. I really wanted to get into using virtual machines for something truly useful… something like modding a SWGEmu server (Core3)! I also wanted to get better performance in games, such as Planetside 2 and Guild Wars 2, but that was secondary.

This was fall 2013 and at that time, hands down the best deal was the AMD FX-8320 if you could catch it on sale for $135 CAD or so (with the FX-6300 being the next best for the same price). Absolutely, a Core i7 3770 (non-K, because VT-d was disabled on the K version…) would have been way better, but it was also $340 – $370 CAD, which was basically my whole upgrade budget. Obviously I couldn’t buy a CPU without a motherboard and RAM, so I waited until the FX-8320 went on sale and bought it. I’ve been nothing but pleased with it since – seriously, it’s a super computer!

I reused my Silverstone heatsink/fan tower, which is enough to keep the CPU around 45C while compiling natively with all 8 threads using gcc in Linux. Its stock speed is 3.5GHz and it turbos up to 4.0GHz. I’ve played around with overclocking and it is happiest sitting at 4.0GHz with turbo and power management options disabled. In Windows it sits at 4GHz all the time, and in Linux it downclocks to 1.8GHz while idle. Letting it downclock in Windows causes noticeable performance issues while playing games and while compiling in a virtual machine, but Linux seems fine either way.

Usually I have a single VM open using VirtualBox, where I work using the Xfce desktop environment in my Debian 8 Linux guest, inside my Windows 10 host. This gives me the best of both worlds – all the GNU software I love, functioning pretty much the same as running it on the hardware directly, and all the Windows software I use (mostly DirectX based games) can take full advantage of my AMD R9 270 video card. As much as I appreciate the WINE project, honestly, Windows games work way better in Windows. A lot of GNU software on the other hand seems to work just fine in a virtual machine, which is awesome.

I like using VMs, because they are their own self contained systems that can share files with the host system and with each other, without messing each other up. For instance, while I could build TrinityCore directly in Windows and get a decent performance boost while compiling, it would also mean I would need to have a MySQL database running in the background too and… I don’t want that running all the time. Yes, I could put just the MySQL DB in a VM, but… you know what, I prefer working in Linux anyway, so it’s just better to have the whole thing as one self contained “work environment”. So that’s what I have, a VM for Legend of Hondo, a VM for helping with the Tarkin 2.0 server, and a VM for Solozeroth (TrinityCore). And some other ones, such as old timey Slackware, just because!

Anyhow, with all that background out of the way, here is what compiling Core3 and TrinityCore looks like on my machine!

Host System
AMD FX-8320 (Locked at 4.0GHz with Turbo disabled)
8GB DDR3 2133 RAM (2x 4GB)
SK Hynix SL300 250GB SATA3 SSD
Windows 10 64Bit Build 15063.138
VirtualBox 5.1.20

Core3 Environment
Debian 8.5
Linux kernel 3.16.0-4-amd64
GCC 4.9.2
Core3 (SWGEmu) 2016.10.06

TrinityCore Environment
Debian 8.7
Linux kernel 3.16.0-4-amd64
GCC 4.9.2
TrinityCore 3.3.5 2017.04.22
[Chart: from-scratch compile time and RAM usage, Core3 vs. TrinityCore, by thread count]
As you can see, TrinityCore takes a hell of a lot longer to compile from scratch than Core3! It also uses more RAM on average and has a much higher peak RAM usage as well. Apart from that, both projects appear to scale similarly when they have access to more threads.

I should note that the 8 cores on my FX-8320, as far as gcc compiling goes, are indeed 8 physical pieces of hardware handling one job each, unlike an Intel i7, which has 4 physical pieces of hardware doing two jobs each. For floating point math operations, my FX-8320 only has 4 physical lumps of hardware that can only handle 4 jobs, unlike an Intel i7, which could handle 8 floating point math jobs. Thankfully, the gcc compiler uses the “integer units”, of which I have 8! So with that said, if you have an FX processor and you’re working with gcc, you can safely ignore the warning in VirtualBox about assigning more CPU cores than you really have – crank it to the max and make sure you have enough RAM!

My problem is, 8GB of RAM isn’t really enough for compiling with 8 cores AND running the game in the host system. So, I tend to leave the VMs at 6 cores with 3.5GB RAM, which leaves plenty of RAM for working in both the host and the guest (running the server while playing the game, for instance – which works great btw!). Yes, that does mean the computer takes longer to compile, but the nice part is that much of the time I don’t need to recompile the entire project. So in reality, most of the time the difference is more like shaving 10 seconds off a 40 second compilation, which isn’t worth worrying about.

Knocking 10 minutes off that 30 minute compile of TrinityCore might be worth the 30 seconds it takes to shut down, move the RAM slider, and boot up though. Unless it’s lunch time or “AFK for hours on end, because distractions!” time…

On a related note, I have been thinking lately that it would be interesting to see how this compares to compiling on the same setup using a new AMD Ryzen processor or a recent Intel i5 or i7 processor. I’ve read several benchmarks/reviews, including this Linux gcc compiling related test on XDA, and it’s safe to say that yup, when you spend more money, you get a better processor!

Unfortunately, 3.5 years after I spent $135 CAD on my FX-8320, it’s still the best option for my workload in its price range. I was hoping the new 4 core, 8 thread Ryzen R5 1400 would be priced around $165 CAD, but it’s $225. The 8 core FX-8300 (a slightly lower clocked, but still fully unlocked, FX-8320) at $145 is only $15 more than the 6 core FX-6300, and honestly it’s a steal for Linux programming and VM work (which is basically the best case scenario for the Bulldozer/Piledriver based CPUs, as their 8 real hardware ALUs are great, but their 4 real hardware FPUs, slow cache, and crowded input pipeline are not so hot for stuff like playing games, music encoding, and some photo editing tools).

It’s kind of a bummer that today I can’t spend less to effectively double my performance, as I did when I made the jump to the $135 CAD FX-8320 from the $185 Core2 Q8200. I was overjoyed back then when my compile times in Rescue Girlies (based on Supertux 0.3.3, an SDL based project) were literally cut in half. That’s my kinda upgrade! Yeah, so anyway, I won’t be upgrading any time soon, because it doesn’t make sense to shell out $295 for the 6 core, 12 thread Ryzen R5 1600 (plus motherboard and RAM) that would almost double my performance. That kind of money would be better spent elsewhere, for all the difference it would actually make in my life! 🙂

When it comes time to upgrade, I am hoping that AMD will have a nice 4 core, 8 thread APU with 512 shaders for around $165. I don’t play any new games and an APU like that would give me a 15% to 25% boost in performance, while dramatically reducing the power usage of my desktop. Yes, it would have half the shaders of my R9 270, so I would probably have to dial back the graphics settings a bit but meh, my old eyes are getting blurry anyway! So we’ll see what 2018 or 2019 brings. Hopefully we’ll get some micro-ATX motherboards with 4GB GDDR6 Video RAM for the APUs, because that would be cool!

SWGEmu – How-To Add a New Slash Command

Introducing new commands to SWGEmu, either for admin use or for players, can be a handy way to improve the end-user experience. Thankfully, it’s not too difficult to do, though doing so does require a minor client side update. I’ll take you through the process step by step. For the example, I’ll add the /helloWorld command.

Requires: IFF data table editor (TRE Explorer, Jawa ToolBox, Sytner’s IFF Editor)

1. Extract a copy of the following command data table to your working folder:

2. Open the data table in the editor and add your new command(s), one per row. Have a look at the other admin commands, then add the following entries to your new one:
– commandName: helloWorld
– cppHook: helloWorld
– targetType: optional
– displayGroup: EDA57E75

3. Save the data table, pack it into your TRE structure, and make sure it is loaded by both the client and the server. That’s it for the client side modding! If you need help with this part, see the following post on Mod The Galaxy.

4. Register the new command in the server software:

– Scroll to the bottom and add a new call under the last one listed, like so:


– Scroll to the bottom and add the include line for the header file of your new command:
#include "helloWorldCommand.h"

5. Create the header file that will perform the actions:

6. Program the actions. In this case, we will simply send the player a system message that says “Hello World!”, however you can program a command to do pretty much anything you’d like. You can even run a Lua function.


#include "server/zone/objects/scene/SceneObject.h"

class HelloWorldCommand : public QueueCommand {
public:
	HelloWorldCommand(const String& name, ZoneProcessServer* server)
		: QueueCommand(name, server) {
	}

	int doQueueCommand(CreatureObject* creature, const uint64& target, const UnicodeString& arguments) const {

		if (!creature->isPlayerCreature()) // If not a player, bail out
			return GENERALERROR;

		creature->sendSystemMessage("Hello World!");

		return SUCCESS;
	}
};
6a. If you’d like to run a Lua function, make the function part of a class (like a screenplay) and include its file in the screenplays.lua list, then add the following to your helloWorld.h after the “if not player” check.

Lua* lua = DirectorManager::instance()->getLuaInstance();

Reference<LuaFunction*> myLuaFunction = lua->createFunction("MyLuaClass", "myLuaFunctionName", 0);
*myLuaFunction << creature; // push arguments here, if your Lua function takes any
myLuaFunction->callFunction();

For the Lua side of things, your command would be like so:

local ObjectManager = require("managers.object.object_manager")
MyLuaClass = {}

function MyLuaClass:myLuaFunctionName(pPlayer)
	CreatureObject(pPlayer):sendSystemMessage("Hello World!")
end

6b. To make the command for administrators only, the easiest way is to simply add the following after the “if not player” check. Note that a standard player’s admin level is 0 and a full server admin is 15. The various levels in between are described in MMOCoreORB/bin/scripts/staff/levels/*.lua

ManagedReference<PlayerObject*> ghost = creature->getPlayerObject();

if (ghost == NULL)
	return GENERALERROR;

int adminLevelCheck = ghost->getAdminLevel();

if (adminLevelCheck != 15) {
	creature->sendSystemMessage("Sorry, the /helloWorld command requires administrator privileges.");
	return GENERALERROR;
}

7. Compile the server code, boot the server, load the client, and enjoy your new slash command!

That’s all there is to it, at the basic level. Depending on the functionality of your new command, you will need to add more C++ or Lua code to various existing or new files. Just keep in mind that everything starts with the doQueueCommand() function in your helloWorld.h header file.

If you’d like, you can also research how the Lua admin-levels framework operates and then add your new command into that framework as well. It turned out to be more trouble than it was worth for me, so I didn’t bother using it.