Project Work: Week 7

Expanding the Dataset

After the results I obtained last week, this week I focused on expanding my dataset to try to get better results. Before doing so, however, I ran a few experiments to get a better idea of the situation.

Preliminary experiments

Before I tried to expand the dataset, I wanted to make sure that doing so would be a worthwhile time investment, so I performed a few experiments to better understand what affects the quality of AtlasNet's results.

First, I took 1000 airplanes from the ShapeNet database (only about one tenth of the category's models) and set everything up to train AtlasNet on them. Training proceeded as usual for 120 epochs, though it took considerably longer than my previous sessions since there was an order of magnitude more data to go through. Once training completed, however, I was satisfied to find that the model did in fact produce good results when I performed reconstructions afterward.

[Figures: 1000-plane training; 1000-plane reconstruction]
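Setting this run up amounted to picking a subset of the airplane category and pointing the training script at it. The sketch below is only a rough reconstruction of that step; the data path and the training-script name and flags are placeholders of mine, not the actual AtlasNet repository layout.

    import os
    import random
    import subprocess

    # Airplane synset in ShapeNet; the local path is an assumption.
    SHAPENET_PLANES = "data/ShapeNet/02691156"

    random.seed(0)
    all_models = sorted(os.listdir(SHAPENET_PLANES))
    subset = random.sample(all_models, 1000)  # roughly one tenth of the category

    with open("plane_subset.txt", "w") as f:
        f.write("\n".join(subset))

    # Train for the usual 120 epochs on the subset.
    # "train.py" and its flag names stand in for the real training entry point.
    subprocess.run(["python", "train.py",
                    "--dataset_list", "plane_subset.txt",
                    "--nepoch", "120"])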

These results confirmed that, given enough data, I can in principle get good results from AtlasNet. However, I was still troubled that I had not been able to get the model to overfit successfully on just one training example (which it should be able to do). I had already discovered quirks such as the network needing a minimum number of training examples to run at all, and simply increasing the number of training epochs did not help when I tried it.

Next I wondered whether the model really is sensitive to the raw number of training examples, so I duplicated one model 300 times to get a fairly large dataset and trained on that. In the visdom tool I saw very good training-set results but still poor validation-set results (even though the validation models were the same). When I tried to perform reconstructions I again got disconnected polygons, confirming that this was not working.

[Figure: reconstruction after overfitting on 300 copies]
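The duplication experiment boiled down to presenting the same point cloud to the network 300 times per epoch. A minimal sketch of the idea (in practice I just copied the model's files on disk; the class and names below are mine, not AtlasNet's):

    import torch
    from torch.utils.data import Dataset

    class RepeatOneSample(Dataset):
        """Presents a single point cloud as if it were a dataset of size n."""

        def __init__(self, points: torch.Tensor, n: int = 300):
            self.points = points  # (num_points, 3) cloud of the one training model
            self.n = n

        def __len__(self):
            return self.n

        def __getitem__(self, idx):
            return self.points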

Going into the code, the only difference I found between training and validation was that the model was set to test/val mode using model.eval(). When I turned this off I got good results on the validation set as well, and a reasonably good reconstruction of the sword model I picked (for the first time!).

[Figure: 300-copy overfit reconstruction with eval() disabled]
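Concretely, the kludge amounted to something like the following change in the validation pass (paraphrased, not the actual AtlasNet code). Since eval() changes the behaviour of layers such as BatchNorm and Dropout, that is presumably where the difference comes from, though I have not confirmed this yet.

    import torch

    def validate(model, val_loader, keep_train_mode=True):
        """Run reconstructions over the validation set.

        keep_train_mode=True reproduces the kludge: the network stays in train
        mode, so layers like BatchNorm keep using per-batch statistics instead
        of the running averages that eval() would switch to.
        """
        if keep_train_mode:
            model.train()  # kludge: leave the network in training mode
        else:
            model.eval()   # original behaviour
        outputs = []
        with torch.no_grad():  # gradients still aren't needed for validation
            for points in val_loader:
                outputs.append(model(points))
        return outputs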

However, this was a kludgy fix, and I still need to investigate further to figure out what is causing the behaviour. Clearly something in the code makes the dataset size matter even when you are trying to overfit; with a large enough dataset, as with the planes, one can still get good validation results even with eval() mode on.

From this point I decided there were two routes I could take: either keep running on a small dataset like the one I have, repeating samples as I did above, or create the larger dataset I originally intended. I am apprehensive about having to rely on huge datasets, since that seems impractical for my application to video game assets, where each kind of asset would then require its own (large) dataset. But since this seems like the route I will inevitably have to take, I decided to go ahead and begin expanding my sword dataset.

Adding to the sword dataset

Thus I began work to expand the sword dataset. Online I was able to procure 70 more sword models to add to the 30 I already had, for 100 in total. For each model I had to download and extract it, import it into Blender, recenter and clean it up, export it again, open it in MeshLab, generate the point cloud, and normalize the cloud. Doing this for all 70 meshes took several hours, but it more than tripled the size of the dataset, so it should be valuable for future experiments.
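The normalization step at the end of that pipeline is the one piece worth writing down. Roughly, it centres each cloud and scales it to a common size; whether AtlasNet expects a unit sphere or a unit cube is an assumption on my part here.

    import numpy as np

    def normalize_cloud(points: np.ndarray) -> np.ndarray:
        """Centre a point cloud on its centroid and scale it into the unit sphere."""
        centred = points - points.mean(axis=0)
        scale = np.linalg.norm(centred, axis=1).max()  # distance of the furthest point
        return centred / scale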

Results after new additions

With my new dataset of 100 sword models, I set up AtlasNet and trained for the standard 120 epochs. The visdom plots made it look like learning was happening, but the network wasn't quite getting there. Performing reconstructions confirmed this: I got similar disjointed (but aligned) polygons, reminiscent of last week's attempt to reconstruct the swords with the pre-trained model.

I wondered whether more training time would help, so I ran again for 250 epochs, which took approximately 25 minutes. In visdom the results seemed a little better, but the reconstructions were still not up to par.

[Figure: reconstruction from 100 swords after 250 epochs]

I may need to further increase the dataset, or else use what I have learned so far to track down what is constraining the network's performance.

Next steps

It seems clear that having a large dataset is important for AtlasNet, and I need to find out why that is. However, given that I have begun to obtain working reconstructions, in the coming week I want to shift my attention to figuring out how to perform random sampling on the learned feature space, since that is what will allow the kind of novel synthesis my application requires. The previous Soltani et al. network I tried had built-in code for this, but AtlasNet does not, so I will need to start looking into it on my own; I will also reach out to the authors on GitHub to see if they already have a solution.
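My rough plan, sketched below, is to encode the training shapes, fit a simple Gaussian to the resulting latent codes, and decode freshly sampled codes. The encoder/decoder interfaces and the grid argument are placeholders for whatever AtlasNet actually exposes; this is not the library's API, just the shape of the idea.

    import torch

    def sample_new_shapes(encoder, decoder, train_clouds, grid, n_samples=5):
        """Fit a Gaussian to the latent codes of the training shapes and decode
        freshly sampled codes. All interfaces here are placeholders."""
        with torch.no_grad():
            # Encode every training cloud to its latent code.
            codes = torch.stack([encoder(pc.unsqueeze(0)).squeeze(0)
                                 for pc in train_clouds])
            # Fit a per-dimension Gaussian and draw new codes from it.
            mean, std = codes.mean(dim=0), codes.std(dim=0)
            new_codes = mean + std * torch.randn(n_samples, codes.shape[1])
            # Decode each sampled code into a surface.
            return [decoder(code.unsqueeze(0), grid) for code in new_codes]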