Define Your Application’s Expected Behavior
Be clear about the behavior you expect from your application, including the relative priority of its tasks and the optimal way to carry the work out.
Factor Out Executable Units of Work
Break your tasks into units of work that are as small as practical. Do not worry too much about the overhead that concurrency adds to the system: the system-provided abstractions (GCD dispatch queues and operation objects) are far cheaper than raw threads.
Identify the Queues You Need
Identify the queues your tasks need.
If you execute tasks as blocks and they must run in a specific order, use a serial dispatch queue; if order does not matter, use a concurrent dispatch queue.
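A minimal sketch of the two queue choices, using illustrative queue labels and placeholder work:

```swift
import Dispatch

// Serial queue: blocks run one at a time, in the order they were submitted.
let serialQueue = DispatchQueue(label: "com.example.serial")

// Concurrent queue: blocks may run simultaneously; completion order is not guaranteed.
let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

serialQueue.async { print("step 1") }   // always finishes before "step 2" begins
serialQueue.async { print("step 2") }

concurrentQueue.async { print("independent work A") }  // A and B may overlap
concurrentQueue.async { print("independent work B") }
```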
If you use operation objects instead, enforcing a specific order means configuring dependencies between the operations.
Dependencies prevent one operation from executing until the objects on which it depends have finished their work.
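For example, a dependency between two operations might look like the sketch below (the operation names are illustrative):

```swift
import Foundation

let downloadOperation = BlockOperation { print("download data") }
let parseOperation = BlockOperation { print("parse data") }

// parseOperation will not start until downloadOperation finishes,
// even though the queue itself runs operations concurrently.
parseOperation.addDependency(downloadOperation)

let queue = OperationQueue()
queue.addOperations([downloadOperation, parseOperation], waitUntilFinished: false)
```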
Tips for Improving Efficiency
Beyond factoring your code into smaller tasks and running them serially or concurrently, there are a few other ways to improve performance:
- Consider computing values directly within your task if memory usage is a factor.
If your application is already memory bound, computing values directly inside the task may be faster than loading cached values from main memory, because direct computation uses the registers and caches of the given processor core, which are much faster than main memory. Of course, you should only do this if testing indicates it is a performance win.
- Identify serial tasks early and do what you can to make them more concurrent.
If a task must be executed serially because it relies on a shared resource, consider changing your architecture to remove that shared resource. For example, you might make a copy of the resource for each client that needs one, or eliminate the resource altogether.
- Avoid using locks.
Avoid locks whenever possible. A serial dispatch queue or operation object dependencies can enforce ordered access to shared data without explicit locking (see the sketch after this list).
- Rely on the system frameworks whenever possible.
Whenever a system framework already provides an API that offers the functionality or concurrency you need, use it; you will usually get better performance than from a custom implementation.
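To illustrate the lock-avoidance tip above, the sketch below funnels access to a piece of shared state through a serial queue instead of a lock; the Counter type is just an example:

```swift
import Dispatch

final class Counter {
    // All reads and writes of `value` go through this serial queue,
    // so no explicit lock is required.
    private let accessQueue = DispatchQueue(label: "com.example.counter")
    private var value = 0

    func increment() {
        accessQueue.async { self.value += 1 }
    }

    func read(_ completion: @escaping (Int) -> Void) {
        accessQueue.async { completion(self.value) }
    }
}
```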
Performance Implications
Use concurrency judiciously; after all, the ultimate goal is to improve your application's performance.
For guidance on measuring performance, see Performance Overview.
Concurrency and Other Technologies
Factoring your code into modular tasks is the best way to try and improve the amount of concurrency in your application. However, this design approach may not satisfy the needs of every application in every case. Depending on your tasks, there might be other options that can offer additional improvements in your application’s overall concurrency. This section outlines some of the other technologies to consider using as part of your design.
OpenCL and Concurrency
In OS X, the Open Computing Language (OpenCL) is a standards-based technology for performing general-purpose computations on a computer’s graphics processor. OpenCL is a good technology to use if you have a well-defined set of computations that you want to apply to large data sets. For example, you might use OpenCL to perform filter computations on the pixels of an image or use it to perform complex math calculations on several values at once. In other words, OpenCL is geared more toward problem sets whose data can be operated on in parallel.
Although OpenCL is good for performing massively data-parallel operations, it is not suitable for more general-purpose calculations. There is a nontrivial amount of effort required to prepare and transfer both the data and the required work kernel to a graphics card so that it can be operated on by a GPU. Similarly, there is a nontrivial amount of effort required to retrieve any results generated by OpenCL. As a result, any tasks that interact with the system are generally not recommended for use with OpenCL. For example, you would not use OpenCL to process data from files or network streams. Instead, the work you perform using OpenCL must be much more self-contained so that it can be transferred to the graphics processor and computed independently.
For more information about OpenCL and how you use it, see OpenCL Programming Guide for Mac.
When to Use Threads
Although operation queues and dispatch queues are the preferred way to perform tasks concurrently, they are not a panacea. Depending on your application, there may still be times when you need to create custom threads. If you do create custom threads, you should strive to create as few threads as possible yourself and you should use those threads only for specific tasks that cannot be implemented any other way.
Threads are still a good way to implement code that must run in real time. Dispatch queues make every attempt to run their tasks as fast as possible but they do not address real time constraints. If you need more predictable behavior from code running in the background, threads may still offer a better alternative.
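If you do decide a dedicated thread is warranted, a minimal Swift sketch might look like the following; note that qualityOfService is only a scheduling hint, not a hard real-time guarantee, and the work closure is a placeholder:

```swift
import Foundation

// A dedicated thread for work that needs more predictable scheduling
// than a dispatch queue provides.
let worker = Thread {
    print("running time-sensitive work on a dedicated thread")  // placeholder work
}
worker.name = "com.example.worker"          // illustrative name
worker.qualityOfService = .userInteractive  // scheduling hint only
worker.start()
```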
As with any threaded programming, you should always use threads judiciously and only when absolutely necessary. For more information about thread packages and how you use them, see Threading Programming Guide.