Good news --- there's no end of research projects :-) First, I'll pass down some advice: don't create a new programming language; we don't need another one. Okay, now some topics off the top of my head:
- tools: static analysis (find errors, identify parallelism in sequential code, ...), run-time analysis to detect errors, debuggers, profilers that can tell us about power usage / cache effectiveness / multicore effectiveness
- rethinking standard algorithms for parallelism, caching, power usage --- e.g. what's the best sort with regard to caching? Power usage? Parallelism? Some of this has been done, but there are lots of other algorithms to revisit
- rethinking data structures... For example, "Linked lists are, like, so last century" --- pointer chasing defeats the cache, so contiguous structures often win
- compiler optimizations for power? caching? parallelism?
- can we make transactional memory practical? What other novel ideas could we add to existing languages?
- we have threads, futures, and tasks. What's the next imperative language extension? What about GPUs? Many-core systems that mix heavy-weight CPU cores with lighter-weight (i.e., power-friendly) ARM-like cores?
- education? When and how to teach all this?
And this is just parallel processing from the single-machine core perspective. What about multi-machine parallelism, aka HPC? Exascale? HPC in the cloud? Oh gosh, there's no end of work to do, which is what makes it so exciting...
A research topic: How to modify the C++ AMP compiler/runtime so that code written in the simple (non-tiled) C++ AMP model takes advantage of tile_static memory automatically, under the covers.
For reference, the C++ AMP open spec: http://blogs.msdn.com/b/nativeconcurrency/archive/2012/02/03/c-amp-open-spec-published.aspx