Problem Description
I want to free a structure that has a spin_lock embedded in it.
The situation is as follows: I have two functions.
f1 () {
    /* ---------------- critical section ---------------- */
    spin_lock_irqsave(&my_obj_ptr->my_lock, flags);
    ....
    ....                                    /* here f2 is called */
    spin_unlock_irqrestore(&my_obj_ptr->my_lock, flags);
    /* --------------------------------------------------- */
    kfree(my_obj_ptr);
}
And f2 has similar content to f1.
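That is (reconstructed from the description above, not shown in the original post), f2 presumably takes the same embedded lock, so f1's kfree can run while f2 is still spinning:

f2 () {
    /* Spins on my_obj_ptr->my_lock -- memory that f1 may
     * have already kfree()d by the time we get here. */
    spin_lock_irqsave(&my_obj_ptr->my_lock, flags);
    ....
    spin_unlock_irqrestore(&my_obj_ptr->my_lock, flags);
}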
When f2 is called while my_lock is held, f2 must busy-wait. However, by the time f2 enters the critical section, my_obj_ptr has already been freed, so the kernel crashes...
What I am thinking of now is adding a ref_count variable to struct my_obj:

before spin_lock_irqsave => ref_count++
after spin_unlock_irqrestore => ref_count--

And before freeing, check the ref_count variable.
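For what it's worth, the kernel already provides struct kref for exactly this reference-counting pattern. A minimal sketch of how it might be applied here (the struct layout, my_obj_release, and the calling convention are illustrative assumptions, not from the original post):

#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_obj {
    spinlock_t my_lock;
    struct kref ref;                     /* reference count */
    /* ... payload ... */
};

/* Called by kref_put() when the last reference is dropped. */
static void my_obj_release(struct kref *kref)
{
    struct my_obj *obj = container_of(kref, struct my_obj, ref);

    kfree(obj);
}

static void f1(struct my_obj *obj)
{
    unsigned long flags;

    /* Note: the caller must already hold a valid reference
     * before kref_get() -- which is exactly the corner case
     * the question is worried about. */
    kref_get(&obj->ref);
    spin_lock_irqsave(&obj->my_lock, flags);
    /* ... critical section ... */
    spin_unlock_irqrestore(&obj->my_lock, flags);
    kref_put(&obj->ref, my_obj_release); /* frees obj on last put */
}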
And it seems there is no crash now.
I just wonder whether there are corner cases I haven't considered, or whether there is a better way to handle this problem?
Any help will be appreciated.
Thanks
Recommended Answer
After tracing the Linux source code, I found an example of solving this kind of problem.
The correct way is to split the spin_lock out of the structure being freed, so that the lock is not destroyed when the target object is destroyed.
And check whether the pointer to the target object is NULL inside the lock-protected region, so that you will not free it twice.
So the sample code may be as follows:
f1 () {
    /* Notice that my_lock is not in the structure my_obj any more */
    spin_lock_irqsave(&my_lock, flags);
    .....
    /* Check whether my_obj_ptr is NULL.
     * kfree must be done in the lock-protected region,
     * otherwise it will not be safe. */
    if (my_obj_ptr != NULL) {
        kfree(my_obj_ptr);
        my_obj_ptr = NULL;  /* so later NULL checks see it is gone */
    }
    ......
    spin_unlock_irqrestore(&my_lock, flags);
}
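Putting the pieces together, here is a self-contained sketch of the pattern (DEFINE_SPINLOCK, the struct contents, and f2's body are illustrative assumptions):

#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_obj {
    int data;                       /* illustrative payload */
};

/* The lock lives outside the object it protects, so it
 * survives the object being freed. */
static DEFINE_SPINLOCK(my_lock);
static struct my_obj *my_obj_ptr;

static void f1(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);
    if (my_obj_ptr != NULL) {
        /* ... use my_obj_ptr ... */
        kfree(my_obj_ptr);
        my_obj_ptr = NULL;          /* mark it gone */
    }
    spin_unlock_irqrestore(&my_lock, flags);
}

/* f2 follows the same pattern: take the external lock and
 * re-check the pointer, so it never touches freed memory. */
static void f2(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);
    if (my_obj_ptr != NULL) {
        /* ... use my_obj_ptr ... */
    }
    spin_unlock_irqrestore(&my_lock, flags);
}

The key point is that both the kfree and every later NULL check happen under the same external lock, so the freed pointer can never be dereferenced.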