
26 October 2017

Review of our code review process

August 19th, 2016: twelve days before the deadline for the release of another expansion of the game world of Kweetet. I read this article online and it absolutely convinced me that we had to start with code reviews, and that we had to start now. The stats spoke for themselves. 80% of the bugs are found during code review? Sign me up!

Did you read the above line? “80% of the bugs are found during code review”. I’m still baffled by that.

When you’re rushing for a deadline is when the most mistakes are introduced, so I wanted to start with code reviews immediately, to catch as many of them as we could. This turned out to be very successful: we found bugs that would have been very embarrassing in a release.

There are a lot of options when setting up a code review process. The one described in this article, for example, is a pre-merge review process with pull requests, which is by far the most common model, certainly when using git.

At die Keure we use perforce for the development of Kweetet, which has no concept of pull requests. This forced us to use a post-check-in process. More specifically, we used a MOB code review process (how cool a name is that?).

Now, a good year later, we’re still doing code reviews and haven’t changed much about our initial process.

Context

Maybe I should start with a little context, because although our process works well in our team, there’s no guarantee it would work in another. Actually, it turned out it didn’t work for another team - more on that later.

Kweetet is played at school and at home during the six years that a child attends primary school. It is a story driven adventure game, where children complete quests, interact with numerous NPCs, collect stuff, play puzzles and minigames and a lot more. A child starts playing the game in the first grade and only finishes it in the sixth. Every school year they start a new chapter in the story. And every chapter is played in another world, which means a lot of content. And a lot of code.

We have apps for Windows, Mac, iOS and Android plus a “lite” version for Android and iOS. Kweetet can't be called a small project; I didn’t count the number of lines, but we have a good 7000+ files with code, not counting the server code. This code has been written by a team of 4-5 people (seniors, juniors and interns) over the course of four years. In total, 10 programmers worked on the code.

We could have done this faster, but we faced a major technical hiccup in the middle of our development: the game used to be played directly in the browser via the Unity webplayer plugin, but alas, browsers decided they didn’t want any plugins anymore, forcing the webplayer to retire. We tried to convert the game to WebGL, but we learned the hard way that this technology, promising as it is, is nowhere near as performant (yet) as the webplayer was. This forced us to convert the game to installed versions on several platforms. We lost a lot of time on that.

So in short, the context is: lots of code, a small team with interns coming and going, over four years. Only in the last 1.5 years did we have code reviews. By now it’s safe to say we should have done this from the start; we would perhaps have been ready a year earlier.

As said before, we don’t use git, which is a common tool in code review processes, but Perforce. Completely unlike git-flow, the whole team works on the trunk (for good reasons, but elaborating on them is out of scope for this post). So in a git context we’re actually reviewing commits rather than pull requests. There is collective code ownership.

Our process for MOB code reviews

  • Right after lunch, we sit down with some team members for a maximum of 45 minutes (it’s often shorter) and review all commits since the last review session. Right after lunch, because then we don’t interrupt anyone’s “zone”.
  • Anyone who wants to and/or has time can join, but you’re not obliged to. In general we do these reviews with 3-4 people; with more, the discussions can become lengthy.
  • We do these reviews on a projector, with someone behind the keyboard going through the files. There’s no designated driver; anyone who wants to can take the wheel.
  • We don’t exceed the 45-minute limit; anything that isn’t reviewed, we review later. Programmers note the bugs that are found and fix them afterwards before continuing with what they were doing.

These are not hard rules: if no one is free, or there haven’t been many commits, the review is postponed. Sometimes we skip to the important parts first and review the other things later.

We try to keep to a daily review, but at least once a week we end up skipping one.

The good

Fewer bugs

Obviously, we have fewer bugs. We don’t measure anything, but we do find mistakes regularly, so those are all bugs that get squashed. This is actually the least important advantage (but an advantage nonetheless).

Better code

The more important advantage is not so much that there are fewer bugs, but that the code is just better. It becomes more readable, more performant, better structured.


Because we talk with each other about our code on a daily basis, we discuss the solutions chosen for problems. We share a lot of views and opinions, we discuss why we prefer certain approaches, we share ideas, we tell each other what we perceive as readable code, and we have many good laughs.

Thus the code became very consistent: all members of the team write code in pretty much the same way now, and we all changed our ways to get to that point. Voluntarily.

Better developers

This made us all better developers: we learned solutions to problems we would never have thought of, we write better and more performant code, and we all know and understand almost the entire code base.

Better team

It made us a better team. We had our share of heated discussions, certainly at the start, but it all happened with mutual respect and it made us a stronger team. There was also a lot of humor. A lot of it :)

Respect and team spirit among team members can really improve the quality of the code, since everyone is open to improvement and feedback. These code reviews foster exactly that.

More resilient

We are more resilient to changes, by which I mean members of the team entering or leaving. When a team member joined us (often interns) they quickly learned the codebase and also learned our approaches. On the other hand we were challenged by these new team members; we had to defend to them why we did things the way we did them. This brought new insights to the team and to the new member.

When we weren’t doing these code reviews, we often ended up with very alien pieces of code in our codebase written by interns who didn’t know any better. We still suffer from that.

And when a team member left, everyone knew the code he/she worked on, so there was little to no need for knowledge transfer sessions.

Collective code ownership

Because everyone has seen the code grow, has discussed the changes, and has had the chance to contribute (even without writing any code), there is a stronger sense of ownership among the team over the complete codebase.

We talk to each other

Instead of writing our remarks from behind a computer, we sit together and talk to each other (of course, this is only possible when your team is not spread over different locations in the world). Differences in opinion are voiced immediately and can be discussed in an open manner. Misunderstandings are detected early. It creates a stronger team spirit.

It’s fast

Typing a sentence is not as fast as saying it. And when you’re typing, you really need to weigh your words, lest they be misinterpreted. Discussions in text can generate a lot of back-and-forth messages, costing more time.

Less delays

The review of a pull request can sit for a while before someone has time to do it, and by the time you get the feedback you’re already working on something else, as happened to the author of this article. By reviewing commits rather than pull requests, we review code very early, even while it’s still in development. This can seem redundant, since the programmer might detect his own errors before he’s done, but more often than not the programmer is simply helped by it. Even better: if we notice that a programmer is implementing a system in a fundamentally wrong way, we detect it before much work has been wasted. This is especially the case when interns join the fray.

The not so good

It didn’t work for the webdev team

We also have a team of four web developers who tried to implement the same process, but they stopped after a few months. They considered it a waste of time and instead started to review pull requests in git from time to time at their own computers. Discussions happen via Slack.

Perhaps important to note is that they work with a weak code ownership model (each programmer is responsible for a separate part of the code or a separate project), which makes this open code review process not such a good fit.

We mostly review code

Since we review the commits one by one, file by file, we’re mostly looking at small local changes to files and at new code files. We’re not looking at the big picture in a structural manner. However, this is not such a bad downside; although we don’t review systems in a systematic way, we often discuss the architecture of the game when a related issue pops up.

We can get behind

Sometimes we get behind with our reviews: when most of us can’t attend, we skip the session. We try to keep this to a minimum, but sometimes we skip a few days and then we have a lot on our plate. There’s nothing to do about that; we just do the work and sometimes extend the duration of the review session a bit. The problem is that this can become a drag, which makes us less attentive. We try to avoid this as much as possible by having review sessions nearly every day.

No history

An advantage of written code reviews is that you have a history of them. You can refer to previous reviews to motivate arguments. You can cross-check how you dealt with similar issues in the past. This is something we don’t have in our process. Can’t say I miss it, though.

On the spot

In the proposed process, we need to spot defects on the spot, which can be hard sometimes. That’s a big advantage of reviewing pull requests at your own computer: you can take all the time you need to review a file or codebase, and you can even take a complete checklist and go over the code at length.

Conclusion

The team and I all feel that this MOB code review process improved our code, our product and ourselves. For a small, co-located team working on the same project, I think this is a good approach.

Feel free to share your own code review experiences below! I'm starting as a lecturer in game programming shortly and I'm playing with the idea of incorporating code reviews in the lessons, so any feedback is more than welcome!

16 December 2016

Why I can't recommend Git for game projects.

As mentioned before on this blog, I have worked with many source control systems in the past 15 years: SVN, CVS (boy, that's old now that I look up that link; I was a student when I last had to use it), Plastic SCM, Git and Perforce.

At work, the idea has started to rise to use Git for our projects. Most of these projects are websites, written in PHP, JavaScript, HTML and CSS. I am very fond of the idea of using Git for those; it opens many, many doors to an easier life.

We can use Travis CI, we deploy everything automagically on AWS, we can work offline and most importantly: we can branch all we want.

Definitely read this article, "A successful Git branching model". This branching model, known as "Git-Flow", does indeed ensure that you have very few merge conflicts. It is the basic model that is also recommended by Sourcetree.

Basically, it boils down to this:

  • There is a master branch
  • There is a dev branch
  • Create branches per feature, only merge them with dev
  • Create branches per release, merge them with dev and master
  • Create branches per hotfix, merge them with dev or the active release branch

And that looks like this: [diagram of the git-flow branching model]
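Roughly, in git commands, the same flow looks something like this (a sketch; the branch names and version numbers are just examples):

    # feature branches come off dev and are merged back into dev
    git checkout -b feature/treehouse dev
    # ... commit work on the feature ...
    git checkout dev
    git merge --no-ff feature/treehouse

    # release branches come off dev and are merged into master and back into dev
    git checkout -b release/1.1 dev
    git checkout master
    git merge --no-ff release/1.1
    git tag 1.1
    git checkout dev
    git merge --no-ff release/1.1

    # hotfix branches come off master and are merged into master and dev
    git checkout -b hotfix/1.1.1 master
    # ... commit the fix ...
    git checkout master
    git merge --no-ff hotfix/1.1.1
    git checkout dev
    git merge --no-ff hotfix/1.1.1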

I believe this is a decent model for using git. In web development.

In game development however, I don't think it's the right tool for the job. Since the discussion about using git started at work, I found myself defending this point of view more and more often. I believe git isn't fit for game development, but it is hard to come up with a compelling case for why that is when challenged on the spot.

Notice that I write "I don't think it's the right tool". I am not sure. And by writing this article and doing the research for it, I hope to become more sure and build a good argument for this thesis.

The situation

First, try and follow me on this. Say you have a photoshop file, a psd. This psd file consists of one hand-painted layer. This psd is in the product that we want to ship, build features for, fix mistakes in, etc.

(I agree this is a contrived example, since you would never create a psd like that, but I'll make an analogy later that will justify this.)

I want this psd in source control, so I don't lose any progress. Thus, first commit on the master branch, we push to dev and start working on the psd. We draw a landscape, we add a tree, we add the sky. Cool, first version is finished, we merge with master.

We start working on the next feature: a tree house in the tree, we do this on a feature branch.

[Diagram: the feature branch]

But! We also need to hotfix the release while we're working on the treehouse! It was required that the tree had leaves and seven apples, but we forgot. So we create a hotfix branch to add the apples, and merge that back to master and to dev.

[Diagram: the hotfix branch]

Up until now, no problemo. But when we want to merge the feature branch with the tree house into dev, we have an issue: binary files don't merge, so we need to choose one of the two, the apples or the treehouse. And even if binary files could be merged, how would you merge hand-painted strokes in an image? Probably best if we just redo the apples on the feature branch and merge that into dev.
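For what it's worth, this is roughly what that forced choice looks like in git commands (a sketch; the file and branch names are made up):

    # merge the feature branch into dev; the psd conflicts because it is binary
    git checkout dev
    git merge feature/treehouse
    # git cannot merge the two versions of the psd, so pick one side...
    git checkout --theirs painting.psd    # keep the feature version (the treehouse)
    # ...or the other:
    # git checkout --ours painting.psd    # keep the dev version (the apples)
    git add painting.psd
    git commit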

Do we just multiply?

A possible "solution"

One psd file containing the whole project clearly won't work, so let's split the features in the file into layers, and let each layer reference another psd in which we draw each separate feature. To follow the same example, we start with a psd in the master branch, push into dev and add a layer with the landscape, a layer with the sky and a layer with the tree. This goes back to master, first version released.

Now the same thing happens: we start a feature branch where we add a tree house. We also do the hotfix and create a layer with apples. Both get merged into dev, but we have a merge conflict. We need to add both the apples layer and the treehouse layer to the same file. But which one goes on top of the other? It may very well be that there is a workable order, but it could also be that it doesn't work either way:

[Images: in one layer order there are no 7 apples; in the other, the tree house is hidden behind the leaves]

It's the same for games

Now the idea is as follows: the psd is a lot like a game. In fact, the situation for a game is a lot worse than with the psd.

  • A game consists of many textures, which all have the same problem as the layer-less psd. They can't be merged.
  • Animations: ditto
  • Models: ditto
  • Audio files: ditto
  • Prefabs: ditto
  • Scenes: ditto

Prefabs and scenes are a lot like the psd with the layers: they reference other assets. Even if you were able to merge scenes and prefabs, changing parts of a scene or a prefab in different branches can easily break the scene/prefab.

Point is, code files and text files are easily merged, but colors, images, shapes and sound are impossible to merge in a reliable way.

Are there any solutions?

(I focus on Unity here, but most of the items here are applicable to all other game engines too.)

  • A typical workaround is to submit these files only when they're completely done, so you never need to merge them. But this is a workaround: when an unforeseen change does come up, you're back in the same situation. And if the files never change, why are they in version control in the first place?
  • Another workaround is to try and have as many files as possible in a mergeable format. Scenes and prefabs, for example, can be stored as text instead of binary data. With Unity's SmartMerge you can even merge the complex YAML files fairly easily! You would have to, because a merge conflict in Git makes the YAML file unparseable and you can't review the changes in the editor anymore (a sketch of hooking SmartMerge into git follows after this list).
    This tool is a godsend, but still won't fix all issues. It can make a file parseable again, but it won't stop treehouses from being placed in the wrong tree. The artist will have to redo part of his work.
  • Another, often used, workaround is to put as little as possible in a scene and split everything into small prefabs. This will indeed avoid many problems, but it can be very cumbersome to manage. And it's just another case of the psd with layers: eventually the parts won't mix and you'll have to redo work.
  • The same article presents another workaround: ensure that only one person can work on the same file at the same time. Git has no support for this. And even if it did (like Perforce does), it still wouldn't make it impossible to change the same file in different branches. Even Perforce can't prevent that.
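As a sketch of that second workaround: SmartMerge is the UnityYAMLMerge tool that ships with the editor, and it can be registered as a git merge tool roughly like this. The path to the executable depends on your Unity installation, so treat it as an example:

    # tell git about UnityYAMLMerge (Windows path shown as an example)
    git config merge.tool unityyamlmerge
    git config mergetool.unityyamlmerge.trustExitCode false
    git config mergetool.unityyamlmerge.cmd '"C:/Program Files/Unity/Editor/Data/Tools/UnityYAMLMerge.exe" merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"'

    # then, when a scene or prefab conflicts:
    git mergetool -t unityyamlmerge Assets/Scenes/Forest.unity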

Other issues

  • Games can grow into large asset- and codebases. Kweetet, for example, has 300K+ files, 25GB in size on the client and 250GB+ on the server. And that's only counting the game, without all the related websites, assets, scripts etc. Divinity II also has 300K+ files, 70GB in size on the client and 500GB+ on the server. As this article points out, git needs to examine all files to check whether they changed, so this is not ideal for large codebases. (Actually, reading that article, I might want to try Mercurial!)
  • Another often-heard issue is that git can't handle big files very well. Since every client stores the entire history of every file, this can grow out of control with many large files. This has recently been addressed with git-lfs, which stores only small pointer files in the repository and keeps the actual file contents in a centralized location, so a client only downloads the revisions it needs. (Thus breaking the whole decentralized idea of git...)
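As a sketch, setting up git-lfs for the heavy formats looks roughly like this (the tracked extensions are just examples):

    # one-time setup per machine
    git lfs install
    # track the large binary formats; the patterns end up in .gitattributes
    git lfs track "*.psd"
    git lfs track "*.fbx"
    git lfs track "*.wav"
    git add .gitattributes
    git commit -m "Track large binary assets with git-lfs"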

Conclusion

In conclusion, I cannot recommend Git for large game development projects with medium to large teams. In large projects the many large binary files cannot be merged, so you need to keep branching to a minimum. On the other hand, if you have a small, short-lived game project and a small team (fourish people), Git would be a valid choice.

So then what?

So what do we do for Kweetet? We use Perforce for version control and have two branches: master and dev. We only develop on dev, and work in cycles from stable dev state to stable dev state. Each time the dev branch is stable (usually when a new feature is ready), we integrate into the master branch. We clean up any issues we find on the master branch and integrate those back into dev. These are mostly small, so it does not happen often that we have conflicts. In practice, we only work on one branch.
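In Perforce commands, one such cycle looks roughly like this (a sketch; the depot paths are made up):

    # dev is stable: integrate it into master
    p4 integrate //depot/kweetet/dev/... //depot/kweetet/master/...
    p4 resolve -am                 # auto-resolve; conflicts are rare here
    p4 submit -d "Integrate stable dev into master"

    # fixes made on master go back into dev the same way
    p4 integrate //depot/kweetet/master/... //depot/kweetet/dev/...
    p4 resolve -am
    p4 submit -d "Integrate master fixes back into dev"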

One of the advantages of using Perforce is that developers can open a scene exclusively, do what they need to do and check it back in. That way, there are never conflicts in the scenes.
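One common way to get that exclusive behaviour by default (not necessarily how our setup does it) is the typemap: run p4 typemap and add entries along these lines, where the +l modifier means only one person can have the file open at a time (a sketch; the extensions are examples):

    TypeMap:
        binary+l //....unity
        binary+l //....prefab
        binary+l //....psd
        binary+l //....fbx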

Another big advantage is that Perforce does not need to check which files changed; you need to tell Perforce that. Of course you don't do that by hand: all our tools do it automatically when we change a file. Unity, Visual Studio, Notepad++, Sublime and others all have Perforce plugins.

If you know of a better way to deal with the issues I mentioned in this post, I'm more than happy to hear it, because I would love to give Git a second chance, or improve our Perforce workflow.


28 August 2016

Setting up a perforce server on a droplet instance.

A colleague of mine was looking for a solution to host a Perforce server somewhere reliably, to be used for small personal projects. There are not many hosted Perforce solutions; Assembla is the only one I know, and it is expensive if you only want to use it for small personal projects, since it is a complete project management tool and thus more than I need.
My colleague found this project by DigitalOcean called 'droplets', where you can have a small VPS for a reasonably low price. 20GB is not big, but it is enough to start personal projects on. Wanting a personal Perforce server myself, I immediately started with a new droplet!

So first I created a so-called 'droplet' with 20GB of disk space and CentOS 7. It is amazing how fast this was set up: within 5 minutes I was up and running, with putty connected to my new server. Cool!

To access the server I've also created a free .tk domain and pointed it to the IP address of the server.

So far the easy part. And as it turned out, the rest was easy too. A Perforce installation used to be a pain in the ass because the Perforce documentation is hard to find for the correct version; there is no clear path on their website to the correct documentation for any server version. It used to be copying binaries to your server, chmod +x them and start configuring. But then I happened to find this page and I literally did what it said:

  • rpm --import https://package.perforce.com/perforce.pubkey
  • vi /etc/yum.repos.d/perforce.repo
    and copy pasted the repo config from the website:
    [perforce]
    name=Perforce
    baseurl=http://package.perforce.com/yum/rhel/7/x86_64
    enabled=1
    gpgcheck=1
  • yum install helix-p4d

    Done! So we list all services to see if there was a p4d instance running:

  • chkconfig --list

    No p4d to be seen, but there is a "perforce-p4dctl"; what is that? The best info I could find is this page. Basically it says "documentation is in the man page" and that's it. That's so weird; why wouldn't they also put the info in their knowledge base? It would be so much easier! Ah well, back to the command line:

  • man p4dctl

    Basically it's a service to manage multiple Perforce instances with easy config files. Cool, way better than it used to be! So I created a .conf file according to the template in the /etc/perforce/p4dctl.conf.d folder (a sketch of what such a file looks like is included at the end of these steps). I created a /home/perforce folder for the P4ROOT and switched its ownership to the perforce user:

  • mkdir /home/perforce
  • chown perforce:perforce /home/perforce

    And then running a list gave me this output:

    [root@alex-droplet-1 home]# p4dctl list
    Type Owner Name Config
    p4d perforce p4-ava port=1666 root=/home/perforce

    Alright! Only one thing left to do:

  • p4dctl start p4-ava

    Aaand done! Wow, this was easy!
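    For reference, the .conf file I mentioned above follows the template that ships in /etc/perforce/p4dctl.conf.d. From memory it looks something like this, so treat the exact keys and paths as examples rather than gospel:

      p4d p4-ava
      {
          Owner   = perforce
          Execute = /opt/perforce/sbin/p4d

          Environment
          {
              P4ROOT = /home/perforce
              P4PORT = 1666
              PATH   = /bin:/usr/bin:/usr/local/bin:/opt/perforce/bin:/opt/perforce/sbin
          }
      }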

    17 November 2015

    The quest for WebGL - part 1

    Because of the deprecation of plugins in first Chrome, then Edge and now even Firefox, we are forced to convert Kweetet to Unity 5 so we can create a WebGL build. In the hope that a WebGL build will even work, that is, because WebGL isn't nearly as powerful as a plugin can be (yet; that's why I don't understand why browsers deprecate plugins without providing a decent alternative. Sure, in the future WebGL will be brilliant, I believe that too, but what about the present?).

    To start we had to do some preparations. The first one: convert from NGUI to the new Unity UI. The reason is simple: the less code there is in the WebGL build, the smaller it will be and the fewer errors there can be. So we're working to ditch all big third-party libraries and use the "native Unity" ones.
    This step went fairly smoothly; NGUI is very similar to the Unity UI, so the conversion was easy. There are a lot of improvements that made our life even easier than before.

    The second step then was to actually make sure the project works in Unity 5 (the NGUI conversion was done in 4.6.x). This conversion was, in short, "less smooth".

    Don't get me wrong, I'm a big fan of Unity and I understand most of the decisions they've made when changing Unity, so I'm with them all the way, but I feel that they are rushing Unity 5 a bit too hard. The features they introduce or change have many bugs and are sometimes badly documented. After a month's worth of work we didn't have a stable version of the game at all. We still have the 4.6.8 branch though, so we can continue to develop, but the 5.2.x version has many issues. On the positive side, forum posts get quick answers from the Unity crew and things are getting fixed.

    My intent is to document as many as possible of the issues I encounter and what I did to solve them, in the hope that this is helpful for others. I want to do this as I go along, since the issues are still fresh in my memory as I write, so this will be a post in multiple parts.

    1. Perforce

    We use the P4Connect plugin from Perforce itself, version 2.6. This is a 32-bit version, so we needed to upgrade; the latest version is 64-bit compatible but doesn't work at all: it keeps losing the Perforce connection when pressing play or performing other actions in the editor, rendering it completely useless.
    This is not a Unity feature, you say? I agree, but the only reason we use the P4Connect plugin is that the default Perforce integration in Unity isn't working either, or at least never did for me. For example, P4Connect checks out every file you touch, together with the meta file, and it even moves files in Perforce when you drag them in the editor. The native Perforce integration did not do this. At the moment we have it up and running with the 2015.2 version. The currently latest 2015.3 version, however, is not capable of creating a correct connection with the server, so don't get that one.

    2. VSTU integration is/was broken.

    Microsoft and Unity announced together that Unity now has VSTU integrated. I was very happy to hear that because I'm an avid user of that plugin. But I was very disappointed with how the result turned out. There are numerous bug reports on this matter, for example here and here. Since the 5.2.2 version things have stabilized a bit, but it's not as good yet as it used to be. For some reason the project needs to reload very often, setting up a new Perforce connection every time that happens, with ReSharper re-parsing all files anew. This wasn't the case with the old VSTU.

    3. Asset bundles have changed - a lot

    The folks at Unity clearly saw that many people created complex asset bundle creation setups for their games, so they came to help and created one of their own that should cover most use cases. It took me quite some time to convert our system to theirs. They did similar things to what we did, but at first the new system was badly documented. By now they have created a magnificent sample project and extensive documentation. Things would have been easier if I had had that at the start :)

    It surprised me a bit how much the system had changed, because I didn't feel the need. It is now possible to define asset bundles in the editor, but for a big production I cannot see that being feasible. Manually assigning the correct asset bundle for each asset in the game sounds like tedious and very error-prone work.

    At Unite Europe 2015 there was a presentation about asset bundles, very interesting and very similar to our own system, but alas only applicable to Unity 4.x. A bit weird to see that presentation there while everything else was about the new Unity 5.

    Next post I'll elaborate on our WebGL version.

    27 October 2012

    ccnet config in source control

    Two posts ago I had a comment from Ignace in which he suggested the idea of having the configuration file of ccnet in the VCS (Perforce in his and my case, but the idea applies to other systems as well). This is indeed a brilliant idea, because if you do that, the state of the code in the VCS will match the state of the config file (or build script, which is what it actually is). This is something you really need, because more often than not the code and the way you build it are tightly coupled. It made me put on my thinking cap.

    You cannot put the config file next to the code base, because
    • There are multiple projects in one config file, so in which project would you include it?
    • There are multiple branches of a project, so which branch should define the build process for all the other branches? And what about merging branches?

    Another problem is that the machine that uses the build script is a server. It cannot open up P4V and get the latest version. So if this needs to be done manually, you might as well just edit the script on the server and be done with it. The only reason to have the script in the VCS would then be for logging purposes and backtracking. Not a good motivator: people would easily forget to update the script in the VCS, leaving the config out of sync, which defeats the whole purpose.

    Googling a bit on this topic brought me to this post, which is actually the perfect solution for the above issues. The config in the post did not work, at least not on the 1.8.2 server and with Perforce, so here is my version:

    <project name="Config File Monitor">
        <workingDirectory>%depot-root%\BuildServers\ServerName\</workingDirectory>
        <triggers>
            <intervalTrigger seconds="60"/>
        </triggers>
        <sourcecontrol type="p4">
            <view>//depot/BuildServers/ServerName/ccnet.config</view>
            ... and the other params you need ...
        </sourcecontrol>
        <tasks>
            <nullTask />
        </tasks>
        <publishers>
            <xmllogger />
        </publishers>
    </project>
    

    Add this project to your config file, place the file at the location specified in it (use a separate folder for each server) and change the path in the ccservice.config file. The project will check every minute whether there is a new version of the config file and update it if needed. Since the server is monitoring that file, it will automatically reload the configuration. I have not noticed any performance issues on the server from checking Perforce every minute. Also note that I really needed to add the nullTask, otherwise it wouldn't work; no idea why, I never bothered to check.

    I wanted to test this out for a while before posting about it, but by now I know I'm very happy with this solution. We don't have to give users special access to some path on the server anymore to be able to change the build config; they can just use Perforce. The versions are in sync with the builds, and since you need to add a new project for every new branch, it's good that the config file resides in its own folder: no messing with branches or anything. People can now also always review for themselves how a build is made and don't have to ask the person responsible for the build server how the build is made. A fine idea by Ignace and a fine solution I've found.

    Have fun with this!

    19 April 2012

    Perforce on a freenas

    Recently I installed an old PC with FreeNAS 8.0.2 for backup purposes, as well as a centralized repository for our home media, music and pictures.

    At both Larian and Newfort we used Perforce for source control. I have also worked with SVN, CVS, Git and Plastic SCM in the past. My SCM of choice remains Perforce; I feel most confident with that system.

    Perforce is now free for 20 users and 20 workspaces, which is a huge improvement over their former 2 users. So for my own projects at home, let's install Perforce on the FreeNAS. Mind you, I have almost no experience with Linux/Unix (I worked with some Debian version at the KULeuven), so I made a lot of beginner mistakes.

    1. Make sure to activate the SSH service on the FreeNAS; that way you can use putty to log in to the FreeNAS over the network instead of having to be physically connected to the PC. You start in the C shell, which is important since some commands that are available in csh are not in bash and vice versa. For example, setting environment variables in bash is done with 'export', while in csh it's done with 'setenv'.
    2. I have a 122 GB disk on which I created a ZFS volume called p4disk.
    3. Then I set up the folders, get the p4 and p4d binaries and make them executable
      cd /mnt/p4disk/
      mkdir bin
      mkdir p4root
      cd bin
      wget http://www.perforce.com/downloads/perforce/r11.1/bin.freebsd70x86/p4d
      wget http://www.perforce.com/downloads/perforce/r11.1/bin.freebsd70x86/p4
      chmod +x p4 p4d
    4. On another drive I created a folder to store the journal. Since you need this to restore the server it should not be on the same drive as the server.
      cd /mnt/TERRA01
      mkdir p4
    5. Then we need to make sure the server is run when the freenas boots. It seems like there are a million ways to accomplish that (I hate that) and this is how I did it. If there is a better way please tell me :)
      mount -uw /
      cd /conf/base/etc/rc.d/
      nano perforce
      The first command is needed to make the drive writable, and then we need to add our service in the rc.d folder. But not in the /etc/rc.d/ folder since that one is always reset after a reboot. 'nano perforce' opens the nano editor to write our service script.
    6. Paste this:
      #!/bin/sh -e
      export P4ROOT=/mnt/p4disk/p4root
      export P4PORT=1666
      PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/bin:/mnt/p4disk/bin"
      p4d -J /mnt/TERRA01/p4/journal &
    7. Save the file and continue:
      chmod +x perforce
      cd /conf/base/etc/
      nano rc.conf
    8. Go to the last line and add:
      perforce_enable="YES"
    9. Reboot and done!

    If I read these steps again, it seems simple enough. I had some weird issues though, all because I'm not that Linux-savvy. A difficult one was that my server was very slow. If I called 'p4 info' on my PC, it took 5 seconds to get a reply. After much googling I found out this was due to a reverse DNS lookup on the server that failed. Some Perforce calls, 'p4 info' among others, perform a reverse DNS lookup and that slowed it down.

    The solution offered by Perforce was to include all clients in the hosts file, which is not much of a solution if you ask me. I eventually found out that the server, and all the PCs in the house too, received three DNS server IP addresses from the router: the router itself and two from my ISP. However, my router is not a DNS server, so all reverse DNS calls to the router failed, causing the delay. A firmware update for the router is not available, so unfortunately I had to set the DNS IPs from my ISP by hand on the FreeNAS. From then on, Perforce ran smoothly. Going to buy myself a new router, I think.

    With the steps I describe, the Perforce server is run by the root user. Some other how-to's, like for example installing SVN on a FreeNAS, say I should have created a perforce group and user and let that user run the server. I didn't bother and have no idea whether or not that's a good thing. If not, tell me why :). The main reason I skipped it is that the 'sudo' command wasn't available on the FreeNAS and I could not find out how to install it.

    Now that all runs fine, I can start experimenting with everything I read in my Effective C# book. I was afraid that the book wouldn't be as good as I hoped, but it turns out to live up to expectations. I'll post something about what I've read later.