6.3  Thread Safety and Concurrency

Concurrency exists when two or more threads make progress, executing instructions at the same time. A single processor system can simulate concurrency by switching execution between two or more threads. A multiprocessor system can support parallel concurrency by executing a separate thread on each processor.

Many applications can benefit from the use of concurrency in their implementation. In a concurrent model of execution, an application is divided into two or more processes or threads, each executing in its own sequence of statements or instructions. An application may consist of one or more processes and a process may consist of one or more threads. Execution may be distributed among two or more machines in a network, two or more processors in a single machine, or interleaved on a single processor.

The separately executing processes or threads must generally compete for access to shared resources and data and must cooperate to accomplish their overall task.

Concurrent application development is a complicated task. Designing a concurrent application involves determining the necessary number of processes or threads, their particular responsibilities, and the methods by which they interact. It also involves determining the good, legal, or invariant program states and the bad or illegal program states. The critical problem is to find and implement a solution that maintains or guarantees good program states while prohibiting bad program states, even in those situations where two or more threads may be acting on the same resource.

In a concurrent environment, a programmer maintains desirable program states by limiting or negotiating access to shared resources using synchronization. The principal role of synchronization is to prevent undesirable or unanticipated interference between simultaneously executing instruction sequences.

Synchronization describes the set of mechanisms or processes for preventing undesirable interleaving of operations or interference between concurrent threads. This is primarily accomplished by serializing access to a shared program state. A programmer may choose between two synchronization techniques: mutual exclusion and condition synchronization.

Mutual exclusion involves combining fine-grained atomic actions into coarse-grained actions and arranging to make these composite actions atomic.

Condition synchronization describes a process or mechanism that delays the execution of a thread until the program satisfies some predicate or condition.

A thread that is waiting on a synchronization mechanism is said to be blocked. Once a thread is unblocked, awakened, or notified, it is rescheduled for further execution.

Two basic uses exist for thread synchronization: to protect the integrity of shared data and to communicate changes in program state between cooperating threads.
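
To make these two uses concrete, the following sketch protects a shared queue with a mutex (mutual exclusion) and blocks a consumer until a producer signals that data is available (condition synchronization). It uses the C++11 standard library (std::mutex, std::condition_variable), which postdates the APIs named later in this section; the class and member names are illustrative only.

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Illustrative sketch: the mutex serializes access to the shared queue,
    // and the condition variable delays a consumer until the predicate
    // "queue is not empty" holds.
    class MessageQueue {
    public:
        void push(int value) {
            {
                std::lock_guard<std::mutex> lock(mutex_);  // enter critical section
                queue_.push(value);                        // modify shared state
            }                                              // release the lock first
            notEmpty_.notify_one();                        // wake one blocked consumer
        }

        int pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            // Block until another thread pushes a value and notifies us.
            notEmpty_.wait(lock, [this] { return !queue_.empty(); });
            int value = queue_.front();
            queue_.pop();
            return value;
        }

    private:
        std::mutex mutex_;
        std::condition_variable notEmpty_;
        std::queue<int> queue_;
    };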

An entity is multithread-safe (MT-safe) if multiple threads can safely access that entity simultaneously. Static class methods may support a different level of thread safety than those associated with an instance of that class. A class or method is considered multithread-hot (MT-hot) if it creates additional threads to accomplish its task.

The IEEE 1003.1 POSIX threads library defines a standard C language API for thread creation and synchronization. The Microsoft Windows™ API possesses a number of thread-related functions. Many C++ class libraries provide object-oriented wrappers for the functions provided by these APIs.
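
As an illustration of such a wrapper, the sketch below uses std::thread, the thread class standardized in C++11 after this text was written; it hides the underlying pthread_create() or CreateThread() call behind a class interface. The worker function is illustrative only.

    #include <iostream>
    #include <string>
    #include <thread>

    // Illustrative sketch of thread creation through an object-oriented wrapper.
    void worker(const char* name) {
        // Output from concurrent threads may interleave; a real program would
        // serialize access to std::cout (see the guidelines that follow).
        std::cout << std::string("hello from ") + name + "\n";
    }

    int main() {
        std::thread producer(worker, "producer");  // the callable starts on a new thread
        std::thread consumer(worker, "consumer");
        producer.join();                           // wait for each thread to finish
        consumer.join();
        return 0;
    }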

64. Design for Reentrancy

Always write code that is reentrant, that is, code that operates correctly when invoked recursively by a single thread or concurrently by multiple threads. To write reentrant code, do not use statically allocated resources, such as character buffers, unless you use some form of mutual exclusion to guarantee serialized access to that resource. Examples of static resources include shared objects, shared memory, I/O devices, and other hardware resources.
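
The following sketch, with illustrative function names, contrasts a non-reentrant design that relies on a statically allocated buffer with two reentrant alternatives.

    #include <cstdio>
    #include <string>

    // Non-reentrant: the static buffer is shared by every caller, so recursive
    // or concurrent calls overwrite each other's result.
    const char* formatIdUnsafe(int id) {
        static char buffer[32];  // statically allocated, shared resource
        std::snprintf(buffer, sizeof(buffer), "id-%d", id);
        return buffer;
    }

    // Reentrant: the caller supplies the storage, so no state is shared.
    void formatId(int id, char* buffer, std::size_t size) {
        std::snprintf(buffer, size, "id-%d", id);
    }

    // Also reentrant: return by value; each call uses its own local storage.
    std::string formatIdString(int id) {
        return "id-" + std::to_string(id);
    }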

65. Use Threads Only Where Appropriate

Multithreading does not equate to improved application performance. Some applications are not suited for multithreading and may run slower following the introduction of multiple threads because of the overhead required to manage those threads.

Before you multithread your application, determine whether it can benefit from the use of threads. Use threads if your application needs

■ To simultaneously respond to many events, e.g., a web browser or server.
■ To provide a high level of responsiveness, e.g., a user interface implementation that can continue to respond to user actions even while the application is performing other computations.
■ To take advantage of machines with multiple processors.

66. Avoid Unnecessary Synchronization

Synchronization can be expensive. On many platforms, the operating system kernel manages interthread synchronization and signaling. On such systems, operations that affect thread scheduling require a context switch to the kernel, which is expensive.

Synchronization serializes access to an object, thereby minimizing potential concurrency. Before synchronizing code, consider whether that code accesses shared state information. If a method operates only on independently synchronized objects, local variables, or data members that never change once initialized, such as those set during construction, then synchronization is not required.

Do not synchronize fundamental data types or structures, such as lists, vectors, etc. Let the users of these objects determine whether external synchronization is necessary.
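
For example, the sketch below (illustrative names, not from the original text) leaves a std::vector unsynchronized and lets the code that shares it pair the container with a mutex, while a purely local use of the same container type needs no locking at all.

    #include <cstddef>
    #include <mutex>
    #include <vector>

    // External synchronization: the container itself stays unsynchronized,
    // and the callers that share it decide how to serialize access.
    struct SharedLog {
        std::mutex mutex;
        std::vector<int> entries;  // plain, unsynchronized container
    };

    void append(SharedLog& log, int value) {
        std::lock_guard<std::mutex> lock(log.mutex);  // caller chooses to lock
        log.entries.push_back(value);
    }

    std::size_t countEntries(SharedLog& log) {
        std::lock_guard<std::mutex> lock(log.mutex);
        return log.entries.size();
    }

    // Local variables are not shared state, so no synchronization is needed.
    int sumLocal() {
        std::vector<int> local{1, 2, 3};
        int sum = 0;
        for (int v : local) sum += v;
        return sum;
    }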

67. Do Not Synchronize Access to Code That Does Not Change Shared State

To maximize concurrency in a program, you must minimize the frequency and duration of lock acquisition. When you use a mutual exclusion lock to define a critical section—a sequence of statements that you want to execute in an atomic fashion— you serialize access to all of the code contained in that critical section. If you include statements that do not modify shared state within a critical section, you increase the amount of time that other threads must wait before they may enter that critical section. This might reduce the maximum concurrency that could be achieved by your application.
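
A minimal sketch of this guideline, with illustrative names: the message-formatting step touches no shared state, so it runs before the lock is acquired, and only the update of the shared container is serialized.

    #include <mutex>
    #include <string>
    #include <utility>
    #include <vector>

    std::mutex logMutex;
    std::vector<std::string> logLines;  // shared state

    void logEvent(int code) {
        // Does not touch shared state: keep it outside the critical section.
        std::string line = "event code " + std::to_string(code);

        std::lock_guard<std::mutex> lock(logMutex);  // critical section begins
        logLines.push_back(std::move(line));         // only the shared update is locked
    }                                                // lock released here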
