• So far I had only studied fast_bin and taken a brief look at the structure and behavior of the other bins, and had never touched tcache_bins; today I'll go through the whole tcache_bins workflow and its details in depth.

  • tcache_bins were added in glibc 2.26. Being newly introduced, they were not very hardened; the tcache in glibc 2.27 is somewhat more stable, so in PWN challenges the tcache_bin exploitation you meet most often targets glibc 2.27. The tcache-related source code from glibc 2.26 and glibc 2.33 is attached at the end of this article.


Introduction

A brief introduction

  • tcache was introduced to speed up memory allocation, but the mechanism significantly weakens security. Since glibc 2.27's tcache is the version discussed most often, this article focuses on the glibc 2.27 tcache.

  • tcache stands for thread local caching: it creates a cache for each thread, enabling a lock-free allocation algorithm.

  • A concrete description of tcache:

    • tcache_binfastbin类似,都是一个指针数组,每个指针数组都用来管理空闲的堆块。这些堆块也是被组织成链表,这些链表都是采用后进先出的规则。
    • tcache_bin中,一个数组元素(是指向空闲chunk的指针),管理着相同大小的chunk,不同大小的chunk,会被放入其他数组元素中管理
    • tcache_bin中,每个数组元素管理的空闲堆块(即链表节点数)不能超过7个。即当tcache_bin一个数组元素管理空闲堆块为7个时,当释放第8个相同大小的堆块,该堆块并不会被放入tcache_bin中,而是直接被放入fast_bin或者其他bins
    • tcache_bin数组一共有64个元素,也就是tcache_bin可以管理从chunksize位为0x20chunksize0x410大小的堆块。当申请的chunksize位大小为0x420时,释放该chunk,该chunk就会被放入unsorted_bin中。
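  • A minimal sketch of those boundaries, assuming the 64-bit constants (MINSIZE 0x20, MALLOC_ALIGNMENT 0x10); csize2tidx is glibc's own size-to-index macro, quoted in the source section at the end of this article:

#include <stdio.h>

/* 64-bit constants assumed; csize2tidx is glibc's size-to-index macro. */
#define MINSIZE          0x20
#define MALLOC_ALIGNMENT 0x10
#define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)

int main(void)
{
    printf("%d\n", (int) csize2tidx(0x20));   /* 0  -> first tcache bin */
    printf("%d\n", (int) csize2tidx(0x410));  /* 63 -> last tcache bin  */
    printf("%d\n", (int) csize2tidx(0x420));  /* 64 -> out of range     */
    return 0;
}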
  • tcachebinfastbin的相同点和不同点:

    • Similarity: both tcachebin and fastbin manage chunks with singly linked lists, and both link through each chunk's fd field.
    • Difference: the fd (next) pointer in a tcachebin points at the next chunk's fd field, i.e. its user-data address, whereas a fastbin fd points at the next chunk's prev_size field, i.e. its chunk header; the runnable sketch below shows the difference.
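  • A minimal sketch of this, assuming glibc 2.27 on 64-bit (it deliberately reads freed memory, which is fine for a debugging experiment but undefined behavior in general):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x10);
    void *b = malloc(0x10);
    free(a);                    /* tcache 0x20: a       */
    free(b);                    /* tcache 0x20: b -> a  */

    /* tcache_put wrote e->next into the first 8 bytes of b's user data.
       It holds a's user pointer (the fd position), not a - 0x10 (the
       chunk header), which is what a fastbin fd would hold.  */
    printf("a           = %p\n", a);
    printf("*(void **)b = %p\n", *(void **)b);
    return 0;
}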
  • Next, a picture to describe tcache_bin:

image-20250121192716388

  • In the source, the following code manages the tcache_bins:
    • The array char counts[TCACHE_MAX_BINS]; tracks how many chunks each index's list currently holds
    • tcache_entry *entries[TCACHE_MAX_BINS]; is the structure shown in the figure above that manages the free lists
typedef struct tcache_perthread_struct
{
  char counts[TCACHE_MAX_BINS];
  tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;
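  • On 64-bit this struct occupies 64 * 1 + 64 * 8 = 576 = 0x240 bytes of user data, which is why the first chunk on the heap of a glibc 2.27 program is the 0x250-byte tcache_perthread_struct chunk. A quick sketch to check the size:

#include <stdio.h>

#define TCACHE_MAX_BINS 64
typedef struct tcache_entry { struct tcache_entry *next; } tcache_entry;
typedef struct tcache_perthread_struct
{
    char counts[TCACHE_MAX_BINS];
    tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;

int main(void)
{
    /* Prints 0x240; adding the 0x10 chunk header gives the 0x250 chunk. */
    printf("%#zx\n", sizeof(tcache_perthread_struct));
    return 0;
}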
  • So a more complete picture of the tcache_bin above looks like this:

image-20250121221922548

The malloc and free flow of tcache_bin

  • tcache_bin comes with two operations, tcache_get() and tcache_put(): tcache_get() takes a chunk that was freed into a tcache_bin back out for reuse, while tcache_put() is the function that places a freed chunk into the tcache_bin.
  • This is the tcache_get flow
    • Before tcache_get is called, the caller computes the tcache_bin index for the requested chunk size and passes it in as the tc_idx argument.
    • The entry at that array index is read (a pointer to a chunk; it holds 0 if the bin is empty)
    • tc_idx is checked first, then the data at that array index is checked
    • Finally the head chunk is unlinked, and counts[tc_idx] (which stores the number of chunks in each index's list) is decremented
static void *
tcache_get (size_t tc_idx)
{
  tcache_entry *e = tcache->entries[tc_idx];
  assert (tc_idx < TCACHE_MAX_BINS);
  assert (tcache->entries[tc_idx] > 0);
  tcache->entries[tc_idx] = e->next;
  --(tcache->counts[tc_idx]);
  return (void *) e;
}
  • Next, the tcache_put flow:
    • The function receives the chunk being freed and that chunk's tc_idx (its bin index)
    • A tcache_entry pointer is overlaid on the chunk's user data (chunk2mem)
    • tc_idx is checked
    • The chunk is then inserted at the head of the list
    • and the list count in counts[tc_idx] is updated
static void
tcache_put (mchunkptr chunk, size_t tc_idx)
{
  tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
  assert (tc_idx < TCACHE_MAX_BINS);
  e->next = tcache->entries[tc_idx];
  tcache->entries[tc_idx] = e;
  ++(tcache->counts[tc_idx]);
}

Experiments

  • These experiments are simply a hands-on way to see the tcache list structure for real; they were run in an Ubuntu 18.04 Docker container, whose glibc is 2.27.

Experiment 1

  • See that chunks of one size form a list of at most 7 nodes in the tcache_bin
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    int *p[20];
    malloc(0x1);
    p[0] = malloc(0x10);
    p[1] = malloc(0x10);
    p[2] = malloc(0x10);
    p[3] = malloc(0x10);
    p[4] = malloc(0x10);
    p[5] = malloc(0x10);
    p[6] = malloc(0x10);
    p[7] = malloc(0x10);
    p[8] = malloc(0x10);
    p[9] = malloc(0x10);
    p[10] = malloc(0x20);
    p[11] = malloc(0x20);
    p[12] = malloc(0x20);
    p[13] = malloc(0x20);
    p[14] = malloc(0x20);
    p[15] = malloc(0x20);
    p[16] = malloc(0x20);
    p[17] = malloc(0x20);
    p[18] = malloc(0x20);
    p[19] = malloc(0x20);
    free(p[0]);
    free(p[1]);
    free(p[2]);
    free(p[3]);
    free(p[4]);
    free(p[5]);
    free(p[6]);
    free(p[7]);
    free(p[8]);
    free(p[9]);
    free(p[10]);
    free(p[11]);
    free(p[12]);
    free(p[13]);
    free(p[14]);
    free(p[15]);
    free(p[16]);
    free(p[17]);
    return 0;
}
# gcc -o test1 test1.c
  • Debug with gdb: step with ni until all the malloc calls have executed, and before the first free inspect the tcache_bins; at this point they are still empty

image-20250121222748346

  • Then free one chunk and inspect the tcache_bins: the freed chunk has been placed into the tcache_bins

image-20250121222915304

  • Free 6 more chunks so that the 0x20 list in the tcachebins is full with 7 chunks

image-20250121223100752

  • Now free one more chunk and inspect tcache_bin and fast_bin: the 8th chunk of size 0x20 was placed into fast_bin

image-20250121223232732

Experiment 2

  • Experiment 2 shows the largest chunk tcache_bins can manage
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    int *p[20];
    malloc(0x1);
    p[0] = malloc(0x10);
    p[1] = malloc(0x20);
    p[2] = malloc(0x30);
    p[3] = malloc(0x40);
    p[4] = malloc(0x50);
    p[5] = malloc(0x60);
    p[6] = malloc(0x70);
    p[7] = malloc(0x80);
    p[8] = malloc(0x90);
    p[9] = malloc(0x100);
    p[10] = malloc(0x110);
    p[11] = malloc(0x120);
    p[12] = malloc(0x130);
    p[13] = malloc(0x400);
    p[14] = malloc(0x410);
    p[15] = malloc(0x420);

    free(p[12]);
    free(p[13]);
    free(p[14]);
    return 0;
}
# gcc -o test2 test2.c
  • Run the program up to the first free and inspect fast_bin, tcache_bin and unsorted_bin

image-20250121224008428

  • After the first free, look at the heap chunks, then at the tcachebins

image-20250121224424931

  • Then execute the remaining two frees and inspect the chunks, the tcachebins and the unsortedbin: the largest chunk size the tcachebins can manage turns out to be 0x410 (the 0x420 chunk went into the unsortedbin)

image-20250121224611538

Related source code

  • 这是glibc2.26tcache的相关源码
宏定义相关
  • 这边定义了Tcache,在glibc2.26malloc.c的第302行到第324行
    • 在第3行定义了TCACHE_MAX_BINS64
    • 在第22行代码中声明每个Tcache_bin中管理的链表最大长度为7
#if USE_TCACHE
/* We want 64 entries.  This is an arbitrary limit, which tunables can reduce.  */
# define TCACHE_MAX_BINS 64
# define MAX_TCACHE_SIZE tidx2usize (TCACHE_MAX_BINS-1)

/* Only used to pre-fill the tunables.  */
# define tidx2usize(idx) (((size_t) idx) * MALLOC_ALIGNMENT + MINSIZE - SIZE_SZ)

/* When "x" is from chunksize().  */
# define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
/* When "x" is a user-provided size.  */
# define usize2tidx(x) csize2tidx (request2size (x))

/* With rounding and alignment, the bins are...
   idx 0   bytes 0..24 (64-bit) or 0..12 (32-bit)
   idx 1   bytes 25..40 or 13..20
   idx 2   bytes 41..56 or 21..28
   etc.  */

/* This is another arbitrary limit, which tunables can change.  Each
   tcache bin will hold at most this number of chunks.  */
# define TCACHE_FILL_COUNT 7
#endif
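  • Plugging the 64-bit constants (MALLOC_ALIGNMENT 0x10, MINSIZE 0x20, SIZE_SZ 8) into tidx2usize shows exactly where the 0x410 limit seen in experiment 2 comes from; a sketch:

#include <stdio.h>

/* 64-bit constants assumed. */
#define SIZE_SZ          8
#define MALLOC_ALIGNMENT 0x10
#define MINSIZE          0x20
#define TCACHE_MAX_BINS  64
#define tidx2usize(idx) (((size_t) idx) * MALLOC_ALIGNMENT + MINSIZE - SIZE_SZ)
#define MAX_TCACHE_SIZE tidx2usize (TCACHE_MAX_BINS-1)

int main(void)
{
    /* 63*0x10 + 0x20 - 8 = 0x408 usable bytes, i.e. a 0x410 chunk. */
    printf("MAX_TCACHE_SIZE = %#zx\n", (size_t) MAX_TCACHE_SIZE);
    return 0;
}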
tcache_bin operations
  • Lines 2921 to 3040 of malloc.c in glibc 2.26 define the tcache operations
/*------------------------ Public wrappers. --------------------------------*/

#if USE_TCACHE

/* We overlay this structure on the user-data portion of a chunk when
   the chunk is stored in the per-thread cache.  */
typedef struct tcache_entry
{
  struct tcache_entry *next;
} tcache_entry;

/* There is one of these for each thread, which contains the
   per-thread cache (hence "tcache_perthread_struct").  Keeping
   overall size low is mildly important.  Note that COUNTS and ENTRIES
   are redundant (we could have just counted the linked list each
   time), this is for performance reasons.  */
typedef struct tcache_perthread_struct
{
  char counts[TCACHE_MAX_BINS];
  tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;

static __thread char tcache_shutting_down = 0;
static __thread tcache_perthread_struct *tcache = NULL;

/* Caller must ensure that we know tc_idx is valid and there's room
   for more chunks.  */
static void
tcache_put (mchunkptr chunk, size_t tc_idx)
{
  tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
  assert (tc_idx < TCACHE_MAX_BINS);
  e->next = tcache->entries[tc_idx];
  tcache->entries[tc_idx] = e;
  ++(tcache->counts[tc_idx]);
}

/* Caller must ensure that we know tc_idx is valid and there's
   available chunks to remove.  */
static void *
tcache_get (size_t tc_idx)
{
  tcache_entry *e = tcache->entries[tc_idx];
  assert (tc_idx < TCACHE_MAX_BINS);
  assert (tcache->entries[tc_idx] > 0);
  tcache->entries[tc_idx] = e->next;
  --(tcache->counts[tc_idx]);
  return (void *) e;
}

static void __attribute__ ((section ("__libc_thread_freeres_fn")))
tcache_thread_freeres (void)
{
  int i;
  tcache_perthread_struct *tcache_tmp = tcache;

  if (!tcache)
    return;

  tcache = NULL;

  for (i = 0; i < TCACHE_MAX_BINS; ++i)
    {
      while (tcache_tmp->entries[i])
        {
          tcache_entry *e = tcache_tmp->entries[i];
          tcache_tmp->entries[i] = e->next;
          __libc_free (e);
        }
    }

  __libc_free (tcache_tmp);

  tcache_shutting_down = 1;
}
text_set_element (__libc_thread_subfreeres, tcache_thread_freeres);

static void
tcache_init(void)
{
  mstate ar_ptr;
  void *victim = 0;
  const size_t bytes = sizeof (tcache_perthread_struct);

  if (tcache_shutting_down)
    return;

  arena_get (ar_ptr, bytes);
  victim = _int_malloc (ar_ptr, bytes);
  if (!victim && ar_ptr != NULL)
    {
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }


  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  /* In a low memory situation, we may not be able to allocate memory
     - in which case, we just keep trying later.  However, we
     typically do this very early, so either there is sufficient
     memory, or there isn't enough memory to do non-trivial
     allocations anyway.  */
  if (victim)
    {
      tcache = (tcache_perthread_struct *) victim;
      memset (tcache, 0, sizeof (tcache_perthread_struct));
    }

}

#define MAYBE_INIT_TCACHE() \
  if (__glibc_unlikely (tcache == NULL)) \
    tcache_init();

#else
#define MAYBE_INIT_TCACHE()
#endif
  • Reading through the malloc.c source, tcache_bin-related operations appear in __libc_malloc, _int_malloc and _int_free; the excerpts below are the tcache_bin-related code from those functions
tcache_bin code in __libc_malloc
  • This code sits at lines 3051 to 3067 of malloc.c in glibc 2.26, inside __libc_malloc
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes = request2size (bytes);
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      /*&& tc_idx < TCACHE_MAX_BINS*/ /* to appease gcc */
      && tcache
      && tcache->entries[tc_idx] != NULL)
    {
      return tcache_get (tc_idx);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif
tcache_bin code in _int_malloc
  • This code sits at lines 3585 to 3604 of malloc.c in glibc 2.26, inside _int_malloc (the fastbin path)
#if USE_TCACHE
      /* While we're here, if we see other chunks of the same size,
         stash them in the tcache.  */
      size_t tc_idx = csize2tidx (nb);
      if (tcache && tc_idx < mp_.tcache_bins)
        {
          mchunkptr tc_victim;

          /* While bin not empty and tcache not full, copy chunks over.  */
          while (tcache->counts[tc_idx] < mp_.tcache_count
                 && (pp = *fb) != NULL)
            {
              REMOVE_FB (fb, tc_victim, pp);
              if (tc_victim != 0)
                {
                  tcache_put (tc_victim, tc_idx);
                }
            }
        }
#endif
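  • The stash loop above can be observed from user code. In this sketch (assuming glibc 2.27), the 8th and 9th frees overflow into the fastbin; the first malloc that finds the tcache bin empty takes the fastbin head and stashes the rest back into the tcache:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p[9];
    for (int i = 0; i < 9; i++)
        p[i] = malloc(0x10);
    for (int i = 0; i < 9; i++)
        free(p[i]);              /* p[0..6] -> tcache, p[7..8] -> fastbin */

    for (int i = 0; i < 7; i++)
        malloc(0x10);            /* drain the tcache bin: p[6] .. p[0]    */

    void *q = malloc(0x10);      /* fastbin head p[8]; the loop above     */
                                 /* then stashes p[7] into the tcache     */
    void *r = malloc(0x10);      /* served from tcache: p[7]              */
    printf("q = %p (p[8] = %p)\n", q, p[8]);
    printf("r = %p (p[7] = %p)\n", r, p[7]);
    return 0;
}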
  • 该代码位于glibc2.26malloc.c3643行到第3668行,位于_int_malloc函数内
#if USE_TCACHE
      /* While we're here, if we see other chunks of the same size,
         stash them in the tcache.  */
      size_t tc_idx = csize2tidx (nb);
      if (tcache && tc_idx < mp_.tcache_bins)
        {
          mchunkptr tc_victim;

          /* While bin not empty and tcache not full, copy chunks over.  */
          while (tcache->counts[tc_idx] < mp_.tcache_count
                 && (tc_victim = last (bin)) != bin)
            {
              if (tc_victim != 0)
                {
                  bck = tc_victim->bk;
                  set_inuse_bit_at_offset (tc_victim, nb);
                  if (av != &main_arena)
                    set_non_main_arena (tc_victim);
                  bin->bk = bck;
                  bck->fd = bin;

                  tcache_put (tc_victim, tc_idx);
                }
            }
        }
#endif
  • This code sits at lines 3707 to 3715 of malloc.c in glibc 2.26, inside _int_malloc (setup before the unsorted-bin scan)
#if USE_TCACHE
  INTERNAL_SIZE_T tcache_nb = 0;
  size_t tc_idx = csize2tidx (nb);
  if (tcache && tc_idx < mp_.tcache_bins)
    tcache_nb = nb;
  int return_cached = 0;

  tcache_unsorted_count = 0;
#endif
  • This code sits at lines 3777 to 3796 of malloc.c in glibc 2.26, inside _int_malloc (within the unsorted-bin scan loop)
#if USE_TCACHE
          /* Fill cache first, return to user only if cache fills.
             We may return one of these chunks later.  */
          if (tcache_nb
              && tcache->counts[tc_idx] < mp_.tcache_count)
            {
              tcache_put (victim, tc_idx);
              return_cached = 1;
              continue;
            }
          else
            {
#endif
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
#if USE_TCACHE
            }
#endif
tcache_bin code in _int_free
  • This code sits at lines 4173 to 4185 of malloc.c in glibc 2.26, inside _int_free
#if USE_TCACHE
  {
    size_t tc_idx = csize2tidx (size);

    if (tcache
        && tc_idx < mp_.tcache_bins
        && tcache->counts[tc_idx] < mp_.tcache_count)
      {
        tcache_put (p, tc_idx);
        return;
      }
  }
#endif
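  • Since neither this _int_free path nor tcache_get validates e->next, overwriting the first 8 bytes of a freed chunk redirects a later malloc: the classic "tcache poisoning" primitive. A sketch for glibc 2.26/2.27, where `fake` stands in for whatever address an attacker wants malloc to return:

#include <stdio.h>
#include <stdlib.h>

long fake[4];                    /* stand-in target address */

int main(void)
{
    long *p = malloc(0x20);
    free(p);                     /* tcache 0x30: p             */
    p[0] = (long) fake;          /* UAF write: e->next = fake  */
    malloc(0x20);                /* pops p                     */
    long *q = malloc(0x20);      /* pops fake                  */
    printf("fake = %p, q = %p\n", (void *) fake, (void *) q);
    return 0;
}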
  • 这是glibc2.33tcache的相关源码,大部分源码都和glibc2.26差不多,但是增加了一个将fd指针加密的操作。好像是2.32就增加了加密机制。
tcache_bin macro definitions
#if USE_TCACHE
/* We want 64 entries.  This is an arbitrary limit, which tunables can reduce.  */
# define TCACHE_MAX_BINS 64
# define MAX_TCACHE_SIZE tidx2usize (TCACHE_MAX_BINS-1)

/* Only used to pre-fill the tunables.  */
# define tidx2usize(idx) (((size_t) idx) * MALLOC_ALIGNMENT + MINSIZE - SIZE_SZ)

/* When "x" is from chunksize().  */
# define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
/* When "x" is a user-provided size.  */
# define usize2tidx(x) csize2tidx (request2size (x))

/* With rounding and alignment, the bins are...
   idx 0   bytes 0..24 (64-bit) or 0..12 (32-bit)
   idx 1   bytes 25..40 or 13..20
   idx 2   bytes 41..56 or 21..28
   etc.  */

/* This is another arbitrary limit, which tunables can change.  Each
   tcache bin will hold at most this number of chunks.  */
# define TCACHE_FILL_COUNT 7

/* Maximum chunks in tcache bins for tunables.  This value must fit the range
   of tcache->counts[] entries, else they may overflow.  */
# define MAX_TCACHE_COUNT UINT16_MAX
#endif
tcache_bin operation code

#if USE_TCACHE

/* We overlay this structure on the user-data portion of a chunk when
   the chunk is stored in the per-thread cache.  */
typedef struct tcache_entry
{
  struct tcache_entry *next;
  /* This field exists to detect double frees.  */
  struct tcache_perthread_struct *key;
} tcache_entry;

/* There is one of these for each thread, which contains the
   per-thread cache (hence "tcache_perthread_struct").  Keeping
   overall size low is mildly important.  Note that COUNTS and ENTRIES
   are redundant (we could have just counted the linked list each
   time), this is for performance reasons.  */
typedef struct tcache_perthread_struct
{
  uint16_t counts[TCACHE_MAX_BINS];
  tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;

static __thread bool tcache_shutting_down = false;
static __thread tcache_perthread_struct *tcache = NULL;

/* Caller must ensure that we know tc_idx is valid and there's room
   for more chunks.  */
static __always_inline void
tcache_put (mchunkptr chunk, size_t tc_idx)
{
  tcache_entry *e = (tcache_entry *) chunk2mem (chunk);

  /* Mark this chunk as "in the tcache" so the test in _int_free will
     detect a double free.  */
  e->key = tcache;

  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
  tcache->entries[tc_idx] = e;
  ++(tcache->counts[tc_idx]);
}

/* Caller must ensure that we know tc_idx is valid and there's
   available chunks to remove.  */
static __always_inline void *
tcache_get (size_t tc_idx)
{
  tcache_entry *e = tcache->entries[tc_idx];
  if (__glibc_unlikely (!aligned_OK (e)))
    malloc_printerr ("malloc(): unaligned tcache chunk detected");
  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
  --(tcache->counts[tc_idx]);
  e->key = NULL;
  return (void *) e;
}

static void
tcache_thread_shutdown (void)
{
  int i;
  tcache_perthread_struct *tcache_tmp = tcache;

  if (!tcache)
    return;

  /* Disable the tcache and prevent it from being reinitialized.  */
  tcache = NULL;
  tcache_shutting_down = true;

  /* Free all of the entries and the tcache itself back to the arena
     heap for coalescing.  */
  for (i = 0; i < TCACHE_MAX_BINS; ++i)
    {
      while (tcache_tmp->entries[i])
        {
          tcache_entry *e = tcache_tmp->entries[i];
          if (__glibc_unlikely (!aligned_OK (e)))
            malloc_printerr ("tcache_thread_shutdown(): "
                             "unaligned tcache chunk detected");
          tcache_tmp->entries[i] = REVEAL_PTR (e->next);
          __libc_free (e);
        }
    }

  __libc_free (tcache_tmp);
}

static void
tcache_init(void)
{
  mstate ar_ptr;
  void *victim = 0;
  const size_t bytes = sizeof (tcache_perthread_struct);

  if (tcache_shutting_down)
    return;

  arena_get (ar_ptr, bytes);
  victim = _int_malloc (ar_ptr, bytes);
  if (!victim && ar_ptr != NULL)
    {
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }


  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  /* In a low memory situation, we may not be able to allocate memory
     - in which case, we just keep trying later.  However, we
     typically do this very early, so either there is sufficient
     memory, or there isn't enough memory to do non-trivial
     allocations anyway.  */
  if (victim)
    {
      tcache = (tcache_perthread_struct *) victim;
      memset (tcache, 0, sizeof (tcache_perthread_struct));
    }

}

# define MAYBE_INIT_TCACHE() \
  if (__glibc_unlikely (tcache == NULL)) \
    tcache_init();

#else  /* !USE_TCACHE */
# define MAYBE_INIT_TCACHE()

static void
tcache_thread_shutdown (void)
{
  /* Nothing to do if there is no thread cache.  */
}

#endif /* !USE_TCACHE */
tcache code in __libc_malloc
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes;
  if (!checked_request2size (bytes, &tbytes))
    {
      __set_errno (ENOMEM);
      return NULL;
    }
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      && tcache
      && tcache->counts[tc_idx] > 0)
    {
      victim = tcache_get (tc_idx);
      return TAG_NEW_USABLE (victim);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif
tcache_bin code in _int_malloc
#if USE_TCACHE
      /* While we're here, if we see other chunks of the same size,
         stash them in the tcache.  */
      size_t tc_idx = csize2tidx (nb);
      if (tcache && tc_idx < mp_.tcache_bins)
        {
          mchunkptr tc_victim;

          /* While bin not empty and tcache not full, copy chunks.  */
          while (tcache->counts[tc_idx] < mp_.tcache_count
                 && (tc_victim = *fb) != NULL)
            {
              if (__glibc_unlikely (misaligned_chunk (tc_victim)))
                malloc_printerr ("malloc(): unaligned fastbin chunk detected 3");
              if (SINGLE_THREAD_P)
                *fb = REVEAL_PTR (tc_victim->fd);
              else
                {
                  REMOVE_FB (fb, pp, tc_victim);
                  if (__glibc_unlikely (tc_victim == NULL))
                    break;
                }
              tcache_put (tc_victim, tc_idx);
            }
        }
#endif



#if USE_TCACHE
      /* While we're here, if we see other chunks of the same size,
         stash them in the tcache.  */
      size_t tc_idx = csize2tidx (nb);
      if (tcache && tc_idx < mp_.tcache_bins)
        {
          mchunkptr tc_victim;

          /* While bin not empty and tcache not full, copy chunks over.  */
          while (tcache->counts[tc_idx] < mp_.tcache_count
                 && (tc_victim = last (bin)) != bin)
            {
              if (tc_victim != 0)
                {
                  bck = tc_victim->bk;
                  set_inuse_bit_at_offset (tc_victim, nb);
                  if (av != &main_arena)
                    set_non_main_arena (tc_victim);
                  bin->bk = bck;
                  bck->fd = bin;

                  tcache_put (tc_victim, tc_idx);
                }
            }
        }
#endif



#if USE_TCACHE
  INTERNAL_SIZE_T tcache_nb = 0;
  size_t tc_idx = csize2tidx (nb);
  if (tcache && tc_idx < mp_.tcache_bins)
    tcache_nb = nb;
  int return_cached = 0;

  tcache_unsorted_count = 0;
#endif




#if USE_TCACHE
          /* Fill cache first, return to user only if cache fills.
             We may return one of these chunks later.  */
          if (tcache_nb
              && tcache->counts[tc_idx] < mp_.tcache_count)
            {
              tcache_put (victim, tc_idx);
              return_cached = 1;
              continue;
            }
          else
            {
#endif
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
#if USE_TCACHE
            }
#endif

tcache_bin key-check (double-free detection) code in _int_free
#if USE_TCACHE
  {
    size_t tc_idx = csize2tidx (size);
    if (tcache != NULL && tc_idx < mp_.tcache_bins)
      {
        /* Check to see if it's already in the tcache.  */
        tcache_entry *e = (tcache_entry *) chunk2mem (p);

        /* This test succeeds on double free.  However, we don't 100%
           trust it (it also matches random payload data at a 1 in
           2^<size_t> chance), so verify it's not an unlikely
           coincidence before aborting.  */
        if (__glibc_unlikely (e->key == tcache))
          {
            tcache_entry *tmp;
            size_t cnt = 0;
            LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
            for (tmp = tcache->entries[tc_idx];
                 tmp;
                 tmp = REVEAL_PTR (tmp->next), ++cnt)
              {
                if (cnt >= mp_.tcache_count)
                  malloc_printerr ("free(): too many chunks detected in tcache");
                if (__glibc_unlikely (!aligned_OK (tmp)))
                  malloc_printerr ("free(): unaligned chunk detected in tcache 2");
                if (tmp == e)
                  malloc_printerr ("free(): double free detected in tcache 2");
                /* If we get here, it was a coincidence.  We've wasted a
                   few cycles, but don't abort.  */
              }
          }

        if (tcache->counts[tc_idx] < mp_.tcache_count)
          {
            tcache_put (p, tc_idx);
            return;
          }
      }
  }
#endif