As far as I understand, the game at the core of Pioneer is that in order to perform well, a player has to show progress. The rate of progress is determined by peers (i.e., other players).
However, peers themselves have very little time to deeply review a project, so a player's progress ends up being determined by the player's own explanation of that progress and a cursory evaluation of the project's landing page. This means the entire game optimizes for better landing pages and ideas that happen to resonate with the peers.
This leads to the same problem that plagues peer review in research: original ideas rarely come out of conferences that rely on a peer-review mechanism. The most interesting ideas tend to appear as journal-first papers, where submissions are evaluated by editors. That's the general trend in PL and software engineering research.
Therefore, a few recommendations:
1. It might be better to reduce the number of progress evaluations each peer performs, say from 10 to 5, or even to 3. This would give peers more time to actually evaluate each project.
2. As with a journal paper submission, add a high-level player, ideally someone from the Pioneer team, to serve as a super-node. The super-node conducts the peer review on a weekly basis and has the final say about a project's progress. Moderation of the peer review is, I suppose, the goal here.
3. The hot-or-not style game is a good starting point, but I would experiment with a rank-based evaluation. In a rank-based evaluation, peers see 5 projects and answer questions about them; at the end, they rank the projects from 1 to 5, with 1 being the highest rank. This would give you more fidelity in terms of actual performance (see the sketch after this list for how such rankings could be combined).
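To make the rank-based idea concrete, here is a minimal sketch of how per-reviewer rankings could be aggregated into a global ordering. It assumes each reviewer submits an ordered list of 5 project IDs, best first, and uses a simple Borda-style count; the function name, the project IDs, and the scoring rule are all my own illustration, not anything Pioneer actually implements.

```python
from collections import defaultdict

def aggregate_rankings(ballots):
    """Combine per-reviewer rankings into a global score.

    Each ballot is an ordered list of project IDs, best first
    (rank 1 through rank 5). A rank-1 placement earns 4 points,
    rank-5 earns 0 (a plain Borda count); higher totals mean
    stronger perceived progress.
    """
    scores = defaultdict(int)
    for ballot in ballots:
        for position, project in enumerate(ballot):
            scores[project] += len(ballot) - 1 - position
    # Sort projects by total score, best first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical ballots from three reviewers, each ranking
# the same batch of five projects (IDs are made up).
ballots = [
    ["p3", "p1", "p5", "p2", "p4"],
    ["p1", "p3", "p2", "p5", "p4"],
    ["p3", "p5", "p1", "p4", "p2"],
]
print(aggregate_rankings(ballots))
# [('p3', 11), ('p1', 9), ('p5', 6), ('p2', 3), ('p4', 1)]
```

The point of the exercise: a binary hot-or-not vote only tells you a project won or lost a single comparison, while a full ranking over 5 projects yields 10 pairwise comparisons per reviewer, so the aggregated scores separate projects much more finely.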