Nothing comes for free, but in general the overhead caused by CDC is minimal.
There is an excellent whitepaper called "Tuning the Performance of Change Data Capture in SQL Server 2008" at http://technet.microsoft.com/en-us/library/dd266396.aspx.
- Proposed as answer by Darren Gosbell (MVP), Thursday, May 21, 2009 6:19 AM
We have been using CDC in production for 4 months on 128 different columns in our database and have had no impact whatsoever. CDC uses the replication log reader (repl-logreader), which is essentially the same mechanism as transactional replication. It is very efficient and has worked great: it takes very little CPU and reads the changes in chunks, so even massive updates do not bog down our disk I/O. There are also configuration options that let you change the chunk size and wait time. I personally would not worry. If your system can handle replication, it can handle CDC.
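For reference, the chunk size and wait time mentioned above are settings on the CDC capture job, adjustable with sys.sp_cdc_change_job; a sketch with illustrative values (not recommendations), assuming the default capture job:

```sql
-- Tune the CDC capture job: transactions read per scan (@maxtrans),
-- scans per processing cycle (@maxscans), and the wait between cycles
-- in seconds (@pollinginterval). Values below are examples only.
EXEC sys.sp_cdc_change_job
    @job_type        = N'capture',
    @maxtrans        = 500,
    @maxscans        = 10,
    @pollinginterval = 5;

-- Restart the capture job so the new settings take effect.
EXEC sys.sp_cdc_stop_job  @job_type = N'capture';
EXEC sys.sp_cdc_start_job @job_type = N'capture';
```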
There is some overhead:
1. The log reader has to scan the transaction log and save the changes to internal tables. This overhead is the same as that of transactional replication, since they use the same log reader.
2. The retention period will impact performance. The longer the retention period, the more rows the change tables accumulate, and the more expensive queries for net changes become.
3. Since the tables hosting CDC changes are system tables (I might be wrong about this), they may be located in the primary filegroup. If you are capturing many tables on a highly volatile database, you may want to replicate it transactionally first, so that the distribution database can be hosted on a physically separate server, and then run CDC on that machine.
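On point 2, the retention period belongs to the CDC cleanup job and can be shortened if the change tables grow too large; a sketch with illustrative values, assuming the default cleanup job:

```sql
-- Reduce retention for the CDC cleanup job. @retention is in minutes
-- (1440 = 1 day); @threshold caps how many entries a single cleanup
-- statement deletes. Values are examples only.
EXEC sys.sp_cdc_change_job
    @job_type  = N'cleanup',
    @retention = 1440,
    @threshold = 5000;
```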
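On point 3, there is some control over placement: sys.sp_cdc_enable_table accepts a @filegroup_name parameter, so a change table does not have to land in the default filegroup. A sketch, where dbo.Orders and the filegroup CDC_FG are hypothetical names for illustration:

```sql
-- Place the change table for dbo.Orders in a dedicated filegroup.
-- CDC_FG must already exist in the database; names are hypothetical.
EXEC sys.sp_cdc_enable_table
    @source_schema  = N'dbo',
    @source_name    = N'Orders',
    @role_name      = NULL,
    @filegroup_name = N'CDC_FG';
```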