IDTxl.py
#!/usr/bin/env python
# coding: utf-8

# # IDTxl Basic Tutorial
#
# For installation and requirements, see:
#
# [Installation and Requirements](https://github.com/pwollstadt/IDTxl/wiki/Installation-and-Requirements)

# In[1]:
get_ipython().run_line_magic('reload_ext', 'autoreload')
get_ipython().run_line_magic('autoreload', '2')
get_ipython().run_line_magic('matplotlib', 'inline')
# ## A first network inference with IDTxl

# In[2]:
# %load test_first_example.py
# Import classes
from idtxl.multivariate_te import MultivariateTE
from idtxl.data import Data
from idtxl.visualise_graph import plot_network
import matplotlib.pyplot as plt

# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=1)

# b) Initialise analysis object and define settings
network_analysis = MultivariateTE()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag_sources': 5,
            'min_lag_sources': 1}

# c) Run analysis
results = network_analysis.analyse_network(settings=settings, data=data)

# d) Plot inferred network to console and via matplotlib
results.print_edge_list(weights='max_te_lag', fdr=False)
plot_network(results=results, weights='max_te_lag', fdr=False)
plt.show()
# ## Theoretical introduction
#
# To get useful results from the data, we need a basic understanding of the theory behind the algorithm.
#
# Why use multivariate transfer entropy?
#
# Transfer entropy, first proposed by Schreiber in 2000, is a directed measure of information flow between two stochastic processes. However, real systems typically contain more than two variables, and in that setting bivariate transfer entropy can produce misleading results:
#
# 1. Spurious or redundant interactions
# 2. Synergistic interactions
#
# We therefore need a method that accounts for interactions among multiple variables.
#
# IDTxl implements the greedy iterative algorithm proposed by Lizier in 2012, which has the following advantages:
#
# 1. The algorithm infers all relevant sources of a target by iteratively including variables from a source's past that maximise a conditional mutual information criterion.
# 2. IDTxl builds the set of parent sources for each target node in the network. This iterative conditioning is designed to both remove redundancies and capture synergistic interactions in building each parent set, thus addressing the two aforementioned shortcomings of bivariate analysis.
# 3. The inclusion of source variables requires repeated statistical testing of each investigated past variable in each iteration. IDTxl handles this multiple testing, in particular the family-wise error rate (FWER), by implementing a so-called maximum statistic test.
# 4. Further statistical tests are aimed at pruning the selected parent set and providing control of the FWER also at the network level.
# 5. At the same time, IDTxl automates the optimisation of parameters necessary for the estimation of mTE.
#
# The first four points are what the algorithm in the 2012 paper describes; where does the last one come from? This is important!
#
# For a more detailed introduction, read the 2012 paper or see the [Theoretical Introduction](https://github.com/pwollstadt/IDTxl/wiki/Theoretical-Introduction)
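# For jointly Gaussian data (the `JidtGaussianCMI` estimator used in the demos here), transfer entropy is equivalent to Granger causality, i.e. half the log-ratio of residual variances of the target's model without vs. with the source's past. A minimal numpy sketch of that relation; this is not IDTxl's implementation, and the coupled AR(1) pair below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative coupled AR(1) pair: y is driven by the past of x
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def gaussian_te(source, target, lag=1):
    """TE in nats for jointly Gaussian data: 0.5 * log of the ratio of
    residual variances without vs. with the source's past (Granger form)."""
    tgt = target[lag:]
    restricted = np.column_stack([target[:-lag], np.ones(len(tgt))])
    full = np.column_stack([target[:-lag], source[:-lag], np.ones(len(tgt))])
    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, tgt, rcond=None)
        return np.var(tgt - design @ beta)
    return 0.5 * np.log(resid_var(restricted) / resid_var(full))

te_xy, te_yx = gaussian_te(x, y), gaussian_te(y, x)
print(te_xy > te_yx)  # information flows x -> y, not y -> x
```

# This equivalence is why a Gaussian estimator suffices for linearly coupled data, while the Kraskov estimators below are needed for non-linear coupling.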
# ## Two main applications
#
# 1. Network inference
# 2. Node dynamics

# ## Network inference methods

# ### Multivariate Transfer Entropy / Granger causality
# In[3]:
# %load ./IDTxl/demos/demo_multivariate_te.py
# Import classes
from idtxl.multivariate_te import MultivariateTE
from idtxl.data import Data
from idtxl.visualise_graph import plot_network
import matplotlib.pyplot as plt

# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# b) Initialise analysis object and define settings
network_analysis = MultivariateTE()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag_sources': 5,
            'min_lag_sources': 1}

# c) Run analysis
results = network_analysis.analyse_network(settings=settings, data=data)

# d) Plot inferred network to console and via matplotlib
results.print_edge_list(weights='max_te_lag', fdr=False)
plot_network(results=results, weights='max_te_lag', fdr=False)
plt.show()
# In[4]:
# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# In[5]:
data.data.shape

# In[6]:
# b) Initialise analysis object and define settings
network_analysis = MultivariateTE()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag_sources': 16,
            'min_lag_sources': 1}

# c) Run analysis
results = network_analysis.analyse_single_target(settings=settings,
                                                 data=data,
                                                 target=0,
                                                 sources=[1, 3])

# In[7]:
results.print_edge_list(weights='max_te_lag', fdr=False)
plot_network(results=results, weights='max_te_lag', fdr=False)
plt.show()
# ### Bivariate TE / Granger causality

# In[8]:
# %load ./IDTxl/demos/demo_bivariate_te.py
# Import classes
from idtxl.bivariate_te import BivariateTE
from idtxl.data import Data
from idtxl.visualise_graph import plot_network
import matplotlib.pyplot as plt

# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# b) Initialise analysis object and define settings
network_analysis = BivariateTE()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag_sources': 5,
            'min_lag_sources': 1}

# c) Run analysis
results = network_analysis.analyse_network(settings=settings, data=data)

# d) Plot inferred network to console and via matplotlib
results.print_edge_list(weights='max_te_lag', fdr=False)
plot_network(results=results, weights='max_te_lag', fdr=False)
plt.show()
# ### Multivariate mutual information

# In[9]:
# %load ./IDTxl/demos/demo_multivariate_mi.py
# Import classes
from idtxl.multivariate_mi import MultivariateMI
from idtxl.data import Data
from idtxl.visualise_graph import plot_network
import matplotlib.pyplot as plt

# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# b) Initialise analysis object and define settings
network_analysis = MultivariateMI()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag_sources': 5,
            'min_lag_sources': 1}

# c) Run analysis
results = network_analysis.analyse_network(settings=settings, data=data)

# d) Plot inferred network to console and via matplotlib
results.print_edge_list(weights='max_te_lag', fdr=False)
plot_network(results=results, weights='max_te_lag', fdr=False)
plt.show()
# ### Bivariate mutual information

# In[10]:
# %load ./IDTxl/demos/demo_bivariate_mi.py
# Import classes
from idtxl.bivariate_mi import BivariateMI
from idtxl.data import Data
from idtxl.visualise_graph import plot_network
import matplotlib.pyplot as plt

# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# b) Initialise analysis object and define settings
network_analysis = BivariateMI()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag_sources': 5,
            'min_lag_sources': 1}

# c) Run analysis
results = network_analysis.analyse_network(settings=settings, data=data)

# d) Plot inferred network to console and via matplotlib
results.print_edge_list(weights='max_te_lag', fdr=False)
plot_network(results=results, weights='max_te_lag', fdr=False)
plt.show()
# ## Network inference: implementation details

# ### Data handling
# IDTxl uses its own class to handle data.

# In[11]:
from idtxl.data import Data
import numpy as np

# The Data class holds up to 3D data, where
#
# 1. one dimension represents processes,
# 2. one dimension represents samples,
# 3. one dimension represents replications.
#
# For example, in a neuroscience setting, processes would correspond to EEG channels, samples to time steps, and replications to trials (repetitions of the same experiment).

# #### Initialise a Data object
# To import your own data set, create a Data object passing a 1D, 2D, or 3D numpy array. To specify which axis of the numpy array represents which dimension of the data, pass a one- to three-letter string as `dim_order`, where 'p' stands for processes, 's' for samples, and 'r' for replications.

# **Example: 3D numpy array**

# In[12]:
# Initialise a data object holding data with 1000 samples,
# 10 processes, and 5 replications.
mydata = np.arange(50000).reshape((1000, 10, 5))
data = Data(mydata, dim_order='spr')

# **Example: 2D numpy array**

# In[13]:
# Initialise a data object holding data with 5 processes and
# 1000 samples
mydata = np.arange(5000).reshape((5, 1000))
dat = Data(mydata, dim_order='ps')

# In[14]:
mydata.shape

# In[15]:
dat.data.shape

# **Example: 1D numpy array**

# In[16]:
# Initialise a data object holding data with 1 process and
# 1000 samples
mydata = np.arange(1000)
dat = Data(mydata, dim_order='s')

# **Example: pandas data frame**

# In[17]:
import pandas as pd
df = pd.DataFrame(
    {'col1': np.random.rand(100), 'col2': np.random.rand(100)})
data = Data(df, dim_order='sp')
# Output: Adding data with properties: 2 processes, 100 samples, 1 replications
# overwriting existing data

# In[18]:
data.data.shape

# IDTxl also provides the functionality to [generate synthetic test data](https://github.com/pwollstadt/IDTxl/wiki/Synthetic-Test-Data)
# ### Setting analysis parameters

# #### Choosing a suitable CMI estimator
# The choice of CMI estimator matters: different data types call for different estimators, selected via the `cmi_estimator` key in the settings dictionary.
#
# The main data types are:
#
# 1. discrete data
# 2. jointly Gaussian continuous data
# 3. non-linear continuous data
#
# For non-linear continuous data there are two estimator options:
#
# 1. `JidtKraskovCMI`
# 2. `OpenCLKraskovCMI`

# In[19]:
import idtxl.estimators_jidt
help(idtxl.estimators_jidt)

# Different CMI estimators are available for discrete data, jointly Gaussian continuous data, and non-linear continuous data. For non-linear continuous data, two alternatives are available:
# 1. A multithreaded Java implementation (JidtKraskovCMI)
# 2. An OpenCL implementation (OpenCLKraskovCMI, to be run on GPUs).

# In[20]:
settings = {'cmi_estimator': 'JidtKraskovCMI'}

# In[21]:
import idtxl.estimators_opencl
help(idtxl.estimators_opencl)
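# The Kraskov estimators wrapped above are nearest-neighbour based. As a rough illustration of the underlying idea, here is a self-contained numpy/scipy sketch of the Kraskov-Stögbauer-Grassberger MI estimator (algorithm 1); this is not IDTxl's implementation, and the correlated Gaussian test signal is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=4):
    """Kraskov-Stögbauer-Grassberger MI estimate (algorithm 1), in nats."""
    x, y = x.reshape(-1, 1), y.reshape(-1, 1)
    n = len(x)
    joint = np.hstack([x, y])
    # distance to the k-th neighbour in the joint space (max-norm, self excluded)
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    # count marginal neighbours strictly closer than eps
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf,
                                     return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf,
                                     return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Illustrative test signal: bivariate Gaussian with correlation 0.9,
# true MI = -0.5 * log(1 - 0.9**2) ≈ 0.83 nats
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = 0.9 * x + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal(2000)
est = ksg_mi(x, y)
print(est)
```

# Unlike the Gaussian estimator, this kind of estimator makes no linearity assumption, which is why it is the recommended choice for non-linear continuous data.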
# #### Setting source and target lags
#
# Compute resources are limited, so we cannot consider every possible lag. Instead we can restrict the range of lags to examine, and even set a lag spacing so that only plausible lags are computed, making good use of prior knowledge. Concretely, the following keys can be set in `settings`:
#
# `max_lag_sources`, `min_lag_sources`, `max_lag_target`, `min_lag_target`, `tau_sources`, `tau_target`
#
# Which of these to set depends on the situation, but remember to always set `max_lag_sources`; otherwise a lot of unnecessary computation will be done.

# In[22]:
settings = {'max_lag_sources': 16,
            'min_lag_sources': 6}

# For transfer entropy estimation, additionally, a maximum lag for the target can be set. If the lag is not specified, it will be set to be equal to the maximum source lag.
# The minimum target lag is always set to 1 sample, to ensure so-called self-prediction optimality (Wibral, 2013, PLOS 8(2)). This is further discussed in the Wiki's Theoretical Introduction.

# In[23]:
settings = {'max_lag_sources': 16,
            'min_lag_sources': 6,
            'max_lag_target': 9}
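# The `tau_sources` and `tau_target` keys listed above sub-sample the candidate sets of past variables, so that not every lag in the range has to be examined. A sketch with purely illustrative values:

```python
# Illustrative values only: with a spacing of 2, only every second lag
# in the source past between min and max is considered as a candidate
settings = {'max_lag_sources': 16,
            'min_lag_sources': 2,
            'tau_sources': 2,    # spacing between candidate source lags
            'max_lag_target': 9,
            'tau_target': 1}     # no sub-sampling of the target's past
print(settings['tau_sources'])
```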
# ### Statistical significance tests
# The `stats` module provides the implementation of several statistical significance tests.
# Network inference mainly involves five significance tests:
#
# 1. `Maximum test`
# 2. `Minimum test`
# 3. `Omnibus test`
# 4. `Sequential maximum test`
# 5. `FDR correction`
#
# For a detailed introduction and the full parameter settings, see the source code or the documentation: [Documentation](https://pwollstadt.github.io/IDTxl/html/index.html), [stats section](https://github.com/pwollstadt/IDTxl/wiki/Theoretical-Introduction#statistical-tests-used-in-the-mte-algorithm)
# The parameters are again set in `settings`: mainly the number of permutations for each test and an alpha level, i.e. the critical significance level against which the permutation p-value is compared.

# In[24]:
settings = {'n_perm_max_stat': 200,
            'alpha_max_stat': 0.05,
            'permute_in_time': False}
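# The maximum statistic test mentioned above controls the FWER by comparing a candidate's observed statistic against the permutation distribution of the *maximum* statistic over all candidates. A toy numpy sketch of this idea, using |correlation| instead of CMI; this is not IDTxl's implementation, and the data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_stat_test(target, candidates, n_perm=200, alpha=0.05):
    """Toy maximum statistic test: is the best candidate's |correlation| with
    the target larger than the permutation distribution of the maximum
    |correlation| over all candidates? Taking the max over candidates in each
    permutation is what controls the family-wise error rate."""
    obs = np.array([abs(np.corrcoef(c, target)[0, 1]) for c in candidates])
    best = int(obs.argmax())
    surrogate_max = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(target)  # destroys any real dependence
        surrogate_max[i] = max(abs(np.corrcoef(c, shuffled)[0, 1])
                               for c in candidates)
    p_value = np.mean(surrogate_max >= obs[best])
    return best, bool(p_value < alpha)

# Illustrative data: one pure-noise candidate, one strongly coupled candidate
target = rng.standard_normal(500)
noise = rng.standard_normal(500)
coupled = target + 0.1 * rng.standard_normal(500)
best, significant = max_stat_test(target, [noise, coupled])
print(best, significant)  # the coupled candidate (index 1) is significant
```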
# ### Conditioning on specific variables
# This enforces conditioning on certain specified variables; it is set as shown below, and usually we do not specify it.
# In order to enforce the inclusion of specific variables in the condition set, add the `add_conditionals` key in the analysis settings dictionary and provide the list of variables to include. Each variable is defined as a tuple `(process index, lag)`, e.g. `(0, 3)` corresponds to the variable in process 0 with a lag of 3 samples with respect to the target.

# In[25]:
settings = {'add_conditionals': [(0, 3), (1, 2)]}

# By setting `add_conditionals` to `faes`, the algorithm adds the current value of all sources to the conditional set. This is meant to correct for instantaneous mixing in the source.

# In[26]:
settings = {'add_conditionals': 'faes'}
# ### Running the analysis
# With the parameters set, we can start the analysis.

# #### Single target
#
# This mainly uses the `analyse_single_target` method, where `target` and `sources` can be set.
# In order to restrict the analysis to a specific target process in the system, run the `analyse_single_target` method and provide the `target` argument in addition to the required `data` and `settings`.
#
# By default, when a target process is specified (e.g. process 0), all the other processes will be considered as potential sources.

# In[27]:
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# In[28]:
settings = {'cmi_estimator': 'JidtKraskovCMI',
            'max_lag_sources': 5,
            'min_lag_sources': 1,
            'permute_in_time': False,
            'add_conditionals': [(0, 3), (1, 2)]}

# The `add_conditionals` parameter cannot be set arbitrarily; when it was set to `faes` just now, the network analysis raised an error.

# In[29]:
results = network_analysis.analyse_single_target(settings=settings,
                                                 data=data,
                                                 target=0,
                                                 sources=[1, 3])

# This example is slow to compute.
# What is this error?
# The problem seems to come from `add_conditionals`.
# #### Whole-network analysis
# To analyse the whole network, use the `analyse_network` method and provide the required `data` and `settings` arguments.

# In[ ]:
results = network_analysis.analyse_network(settings=settings,
                                           data=data)

# The algorithm will loop over all processes in the network and first analyse each of them separately as a target. The results will finally be combined in a [Results object](https://github.com/pwollstadt/IDTxl/wiki/The-Results-Class). By default, an FDR correction ([Benjamini & Hochberg, 1995](http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_hochberg1995.pdf)) will be performed when combining the results from the different targets.
#
# In order to skip the FDR correction, refer to the tutorial on [Statistical Significance Tests](https://github.com/pwollstadt/IDTxl/wiki/Statistical-Significance-Tests).
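# The FDR correction applied when combining targets is the Benjamini-Hochberg step-up procedure. A self-contained numpy sketch of how it decides which p-values to reject (not IDTxl's code; the p-values are illustrative):

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of rejected nulls."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # compare the i-th smallest p-value against alpha * i / m
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index passing its threshold
        reject[order[:k + 1]] = True      # reject everything up to that rank
    return reject

rejected = fdr_bh([0.01, 0.02, 0.03, 0.5], alpha=0.05)
print(rejected)  # the first three are rejected, the last is not
```

# Note that the step-up rule rejects p-values that would fail a per-comparison threshold on their own (e.g. 0.03 > 0.05 * 3/4 is false here only because of its rank), which is what makes it less conservative than a Bonferroni correction.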
# Besides this, we can also restrict the target and source variables.

# #### The Results class
# All of IDTxl's computation results are stored in a Results object, e.g. for `ActiveInformationStorage()` and `MultivariateTE()`; the results of single-target and whole-network analyses as well as the list of analysis settings are stored there too, and can be accessed directly as attributes.
# Some basic uses:
#
# 1. Use `results.settings` to inspect the analysis settings
# 2. Use `results.print_edge_list(weights='max_te_lag', fdr=False)` to print the significant information flows and their lags
# 3. Use `get_adjacency_matrix()` to inspect the adjacency matrix
# 4. The contribution to a single target can be inspected
# 5. The data properties can be inspected
# 6. The results can be used for visualisation
#
# There are also advanced features such as `combine_results` and `PartialInformationDecomposition`; see the Documentation and the related papers. We do not need them for now.
# ### Visualising the network analysis

# **Now let's consider the next topic: node dynamics!**

# ## Node dynamics
# ### Active Information Storage
# This example analyses a single stochastic process: it measures the contribution of $X_n^{(k)}$ to $X_{n+1}$, and the maximum history length k to examine has to be set in advance.
# In[ ]:
# %load ./IDTxl/demos/demo_active_information_storage.py
# Import classes
from idtxl.active_information_storage import ActiveInformationStorage
from idtxl.data import Data

# a) Generate test data
data = Data()
data.generate_mute_data(n_samples=1000, n_replications=5)

# b) Initialise analysis object and define settings
network_analysis = ActiveInformationStorage()
settings = {'cmi_estimator': 'JidtGaussianCMI',
            'max_lag': 5}

# c) Run analysis
results = network_analysis.analyse_network(settings=settings, data=data)

# d) Plot list of processes with significant AIS to console
print(results.get_significant_processes(fdr=False))
# ### Partial Information Decomposition
# The following is a logical XOR example with two input variables and one output variable.

# In[ ]:
# %load ./IDTxl/demos/demo_partial_information_decomposition.py
# Import classes
import numpy as np
from idtxl.partial_information_decomposition import (
    PartialInformationDecomposition)
from idtxl.data import Data

# a) Generate test data
n = 100
alph = 2
x = np.random.randint(0, alph, n)
y = np.random.randint(0, alph, n)
z = np.logical_xor(x, y).astype(int)
data = Data(np.vstack((x, y, z)), 'ps', normalise=False)

# b) Initialise analysis object and define settings for both PID estimators
pid = PartialInformationDecomposition()
settings_tartu = {'pid_estimator': 'TartuPID', 'lags_pid': [0, 0]}
settings_sydney = {
    'alph_s1': alph,
    'alph_s2': alph,
    'alph_t': alph,
    'max_unsuc_swaps_row_parm': 60,
    'num_reps': 63,
    'max_iters': 1000,
    'pid_estimator': 'SydneyPID',
    'lags_pid': [0, 0]}

# c) Run Tartu estimator
results_tartu = pid.analyse_single_target(
    settings=settings_tartu, data=data, target=2, sources=[0, 1])

# d) Run Sydney estimator
pid = PartialInformationDecomposition()
results_sydney = pid.analyse_single_target(
    settings=settings_sydney, data=data, target=2, sources=[0, 1])

# e) Print results to console
print('\nLogical XOR')
print('Estimator Sydney\t\tTartu\t\tExpected\n')
print('Uni s1 {0:.4f}\t\t{1:.4f}\t\t{2:.2f}'.format(
    results_sydney.get_single_target(2)['unq_s1'],
    results_tartu.get_single_target(2)['unq_s1'],
    0))
print('Uni s2 {0:.4f}\t\t{1:.4f}\t\t{2:.2f}'.format(
    results_sydney.get_single_target(2)['unq_s2'],
    results_tartu.get_single_target(2)['unq_s2'],
    0))
print('Shared s1_s2 {0:.4f}\t\t{1:.4f}\t\t{2:.2f}'.format(
    results_sydney.get_single_target(2)['shd_s1_s2'],
    results_tartu.get_single_target(2)['shd_s1_s2'],
    0))
print('Synergy s1_s2 {0:.4f}\t\t{1:.4f}\t\t{2:.2f}'.format(
    results_sydney.get_single_target(2)['syn_s1_s2'],
    results_tartu.get_single_target(2)['syn_s1_s2'],
    1))
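# As a plain-counting sanity check on the XOR construction above (independent of IDTxl): each source alone shares no information with the output, while jointly the sources determine it completely, so the joint mutual information is one full bit, which the PID attributes to synergy. The helper name below is illustrative:

```python
import numpy as np
from collections import Counter

def entropy_bits(symbols):
    """Plug-in Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 4000)
y = rng.integers(0, 2, 4000)
z = x ^ y

# I(X;Z) = H(X) + H(Z) - H(X,Z): a single source tells us nothing about the XOR
mi_xz = entropy_bits(x) + entropy_bits(z) - entropy_bits(list(zip(x, z)))
# I(X,Y;Z): jointly the sources determine Z exactly, so the joint MI is H(Z) ≈ 1 bit
mi_xyz = (entropy_bits(list(zip(x, y))) + entropy_bits(z)
          - entropy_bits(list(zip(x, y, z))))
print(round(mi_xz, 2), round(mi_xyz, 2))
```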
# The result is actually interesting.
#
# The results are quite good: 0.9705 is already very close to the expected value of 1.
# Related literature to read on this:
# 1. Non-negative information decomposition
# 2. Local information measures

# ### Implementation details
# The theoretical basis of this part is local information measures; see the 2008, 2010 and 2014 papers for details. For now we have only looked at what the code looks like.
# The workflow is much the same as for network inference: data handling, setting the analysis parameters, choosing a suitable estimator, statistical significance testing, analysing the results, and so on; see the [Documentation](https://github.com/pwollstadt/IDTxl/wiki) for details.
# ## Caveats
# 1. There are many parameters, so set them with care.

# In[ ]: