ajithradnus committed on
Commit 0d7b16d · verified · 1 Parent(s): c8c2df2

Upload 8 files

Files changed (8)
  1. .gitignore +5 -0
  2. LICENSE +674 -0
  3. __init__.py +39 -0
  4. install.bat +37 -0
  5. install.py +104 -0
  6. nodes.py +1237 -0
  7. reactor_utils.py +231 -0
  8. requirements.txt +7 -0
.gitignore ADDED
@@ -0,0 +1,5 @@
+ __pycache__/
+ *$py.class
+ .vscode/
+ example
+ input
LICENSE ADDED
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+ software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+ to take away your freedom to share and change the works. By contrast,
+ the GNU General Public License is intended to guarantee your freedom to
+ share and change all versions of a program--to make sure it remains free
+ software for all its users. We, the Free Software Foundation, use the
+ GNU General Public License for most of our software; it applies also to
+ any other work released this way by its authors. You can apply it to
+ your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+ price. Our General Public Licenses are designed to make sure that you
+ have the freedom to distribute copies of free software (and charge for
+ them if you wish), that you receive source code or can get it if you
+ want it, that you can change the software or use pieces of it in new
+ free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+ these rights or asking you to surrender the rights. Therefore, you have
+ certain responsibilities if you distribute copies of the software, or if
+ you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+ gratis or for a fee, you must pass on to the recipients the same
+ freedoms that you received. You must make sure that they, too, receive
+ or can get the source code. And you must show them these terms so they
+ know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+ (1) assert copyright on the software, and (2) offer you this License
+ giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+ that there is no warranty for this free software. For both users' and
+ authors' sake, the GPL requires that modified versions be marked as
+ changed, so that their problems will not be attributed erroneously to
+ authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+ modified versions of the software inside them, although the manufacturer
+ can do so. This is fundamentally incompatible with the aim of
+ protecting users' freedom to change the software. The systematic
+ pattern of such abuse occurs in the area of products for individuals to
+ use, which is precisely where it is most unacceptable. Therefore, we
+ have designed this version of the GPL to prohibit the practice for those
+ products. If such problems arise substantially in other domains, we
+ stand ready to extend this provision to those domains in future versions
+ of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+ States should not allow patents to restrict development and use of
+ software on general-purpose computers, but in those that do, we wish to
+ avoid the special danger that patents applied to a free program could
+ make it effectively proprietary. To prevent this, the GPL assures that
+ patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+ modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+ works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+ License. Each licensee is addressed as "you". "Licensees" and
+ "recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+ in a fashion requiring copyright permission, other than the making of an
+ exact copy. The resulting work is called a "modified version" of the
+ earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+ on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+ permission, would make you directly or secondarily liable for
+ infringement under applicable copyright law, except executing it on a
+ computer or modifying a private copy. Propagation includes copying,
+ distribution (with or without modification), making available to the
+ public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+ parties to make or receive copies. Mere interaction with a user through
+ a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+ to the extent that it includes a convenient and prominently visible
+ feature that (1) displays an appropriate copyright notice, and (2)
+ tells the user that there is no warranty for the work (except to the
+ extent that warranties are provided), that licensees may convey the
+ work under this License, and how to view a copy of this License. If
+ the interface presents a list of user commands or options, such as a
+ menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+ for making modifications to it. "Object code" means any non-source
+ form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+ standard defined by a recognized standards body, or, in the case of
+ interfaces specified for a particular programming language, one that
+ is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+ than the work as a whole, that (a) is included in the normal form of
+ packaging a Major Component, but which is not part of that Major
+ Component, and (b) serves only to enable use of the work with that
+ Major Component, or to implement a Standard Interface for which an
+ implementation is available to the public in source code form. A
+ "Major Component", in this context, means a major essential component
+ (kernel, window system, and so on) of the specific operating system
+ (if any) on which the executable work runs, or a compiler used to
+ produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+ the source code needed to generate, install, and (for an executable
+ work) run the object code and to modify the work, including scripts to
+ control those activities. However, it does not include the work's
+ System Libraries, or general-purpose tools or generally available free
+ programs which are used unmodified in performing those activities but
+ which are not part of the work. For example, Corresponding Source
+ includes interface definition files associated with source files for
+ the work, and the source code for shared libraries and dynamically
+ linked subprograms that the work is specifically designed to require,
+ such as by intimate data communication or control flow between those
+ subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+ can regenerate automatically from other parts of the Corresponding
+ Source.
+
+ The Corresponding Source for a work in source code form is that
+ same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+ copyright on the Program, and are irrevocable provided the stated
+ conditions are met. This License explicitly affirms your unlimited
+ permission to run the unmodified Program. The output from running a
+ covered work is covered by this License only if the output, given its
+ content, constitutes a covered work. This License acknowledges your
+ rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+ convey, without conditions so long as your license otherwise remains
+ in force. You may convey covered works to others for the sole purpose
+ of having them make modifications exclusively for you, or provide you
+ with facilities for running those works, provided that you comply with
+ the terms of this License in conveying all material for which you do
+ not control copyright. Those thus making or running the covered works
+ for you must do so exclusively on your behalf, under your direction
+ and control, on terms that prohibit them from making any copies of
+ your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+ the conditions stated below. Sublicensing is not allowed; section 10
+ makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+ measure under any applicable law fulfilling obligations under article
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
+ similar laws prohibiting or restricting circumvention of such
+ measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+ circumvention of technological measures to the extent such circumvention
+ is effected by exercising rights under this License with respect to
+ the covered work, and you disclaim any intention to limit operation or
+ modification of the work as a means of enforcing, against the work's
+ users, your or third parties' legal rights to forbid circumvention of
+ technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+ receive it, in any medium, provided that you conspicuously and
+ appropriately publish on each copy an appropriate copyright notice;
+ keep intact all notices stating that this License and any
+ non-permissive terms added in accord with section 7 apply to the code;
+ keep intact all notices of the absence of any warranty; and give all
+ recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+ and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+ produce it from the Program, in the form of source code under the
+ terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+ works, which are not by their nature extensions of the covered work,
+ and which are not combined with it such as to form a larger program,
+ in or on a volume of a storage or distribution medium, is called an
+ "aggregate" if the compilation and its resulting copyright are not
+ used to limit the access or legal rights of the compilation's users
+ beyond what the individual works permit. Inclusion of a covered work
+ in an aggregate does not cause this License to apply to the other
+ parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+ of sections 4 and 5, provided that you also convey the
+ machine-readable Corresponding Source under the terms of this License,
+ in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+ from the Corresponding Source as a System Library, need not be
+ included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+ tangible personal property which is normally used for personal, family,
+ or household purposes, or (2) anything designed or sold for incorporation
+ into a dwelling. In determining whether a product is a consumer product,
+ doubtful cases shall be resolved in favor of coverage. For a particular
+ product received by a particular user, "normally used" refers to a
+ typical or common use of that class of product, regardless of the status
+ of the particular user or of the way in which the particular user
+ actually uses, or expects or is expected to use, the product. A product
+ is a consumer product regardless of whether the product has substantial
+ commercial, industrial or non-consumer uses, unless such uses represent
+ the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+ procedures, authorization keys, or other information required to install
+ and execute modified versions of a covered work in that User Product from
+ a modified version of its Corresponding Source. The information must
+ suffice to ensure that the continued functioning of the modified object
+ code is in no case prevented or interfered with solely because
+ modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+ specifically for use in, a User Product, and the conveying occurs as
+ part of a transaction in which the right of possession and use of the
+ User Product is transferred to the recipient in perpetuity or for a
+ fixed term (regardless of how the transaction is characterized), the
+ Corresponding Source conveyed under this section must be accompanied
+ by the Installation Information. But this requirement does not apply
+ if neither you nor any third party retains the ability to install
+ modified object code on the User Product (for example, the work has
+ been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+ requirement to continue to provide support service, warranty, or updates
+ for a work that has been modified or installed by the recipient, or for
+ the User Product in which it has been modified or installed. Access to a
+ network may be denied when the modification itself materially and
+ adversely affects the operation of the network or violates the rules and
+ protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+ in accord with this section must be in a format that is publicly
+ documented (and with an implementation available to the public in
+ source code form), and must require no special password or key for
+ unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+ License by making exceptions from one or more of its conditions.
+ Additional permissions that are applicable to the entire Program shall
+ be treated as though they were included in this License, to the extent
+ that they are valid under applicable law. If additional permissions
+ apply only to part of the Program, that part may be used separately
+ under those permissions, but the entire Program remains governed by
+ this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+ remove any additional permissions from that copy, or from any part of
+ it. (Additional permissions may be written to require their own
+ removal in certain cases when you modify the work.) You may place
+ additional permissions on material, added by you to a covered work,
+ for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+ add to a covered work, you may (if authorized by the copyright holders of
+ that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+ restrictions" within the meaning of section 10. If the Program as you
+ received it, or any part of it, contains a notice stating that it is
+ governed by this License along with a term that is a further
+ restriction, you may remove that term. If a license document contains
+ a further restriction but permits relicensing or conveying under this
+ License, you may add to a covered work material governed by the terms
+ of that license document, provided that the further restriction does
+ not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+ must place, in the relevant source files, a statement of the
+ additional terms that apply to those files, or a notice indicating
+ where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+ form of a separately written license, or stated as exceptions;
+ the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+ provided under this License. Any attempt otherwise to propagate or
+ modify it is void, and will automatically terminate your rights under
+ this License (including any patent licenses granted under the third
+ paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+ license from a particular copyright holder is reinstated (a)
+ provisionally, unless and until the copyright holder explicitly and
+ finally terminates your license, and (b) permanently, if the copyright
+ holder fails to notify you of the violation by some reasonable means
+ prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+ reinstated permanently if the copyright holder notifies you of the
+ violation by some reasonable means, this is the first time you have
+ received notice of violation of this License (for any work) from that
+ copyright holder, and you cure the violation prior to 30 days after
+ your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+ licenses of parties who have received copies or rights from you under
+ this License. If your rights have been terminated and not permanently
+ reinstated, you do not qualify to receive new licenses for the same
+ material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+ run a copy of the Program. Ancillary propagation of a covered work
+ occurring solely as a consequence of using peer-to-peer transmission
+ to receive a copy likewise does not require acceptance. However,
+ nothing other than this License grants you permission to propagate or
+ modify any covered work. These actions infringe copyright if you do
+ not accept this License. Therefore, by modifying or propagating a
+ covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+ receives a license from the original licensors, to run, modify and
+ propagate that work, subject to this License. You are not responsible
+ for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+ organization, or substantially all assets of one, or subdividing an
+ organization, or merging organizations. If propagation of a covered
+ work results from an entity transaction, each party to that
+ transaction who receives a copy of the work also receives whatever
+ licenses to the work the party's predecessor in interest had or could
+ give under the previous paragraph, plus a right to possession of the
+ Corresponding Source of the work from the predecessor in interest, if
+ the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+ rights granted or affirmed under this License. For example, you may
+ not impose a license fee, royalty, or other charge for exercise of
+ rights granted under this License, and you may not initiate litigation
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
+ any patent claim is infringed by making, using, selling, offering for
+ sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+ License of the Program or a work on which the Program is based. The
+ work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+ owned or controlled by the contributor, whether already acquired or
+ hereafter acquired, that would be infringed by some manner, permitted
+ by this License, of making, using, or selling its contributor version,
+ but do not include claims that would be infringed only as a
+ consequence of further modification of the contributor version. For
+ purposes of this definition, "control" includes the right to grant
+ patent sublicenses in a manner consistent with the requirements of
+ this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+ patent license under the contributor's essential patent claims, to
+ make, use, sell, offer for sale, import and otherwise run, modify and
+ propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+ agreement or commitment, however denominated, not to enforce a patent
+ (such as an express permission to practice a patent or covenant not to
+ sue for patent infringement). To "grant" such a patent license to a
+ party means to make such an agreement or commitment not to enforce a
+ patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+ and the Corresponding Source of the work is not available for anyone
+ to copy, free of charge and under the terms of this License, through a
+ publicly available network server or other readily accessible means,
+ then you must either (1) cause the Corresponding Source to be so
+ available, or (2) arrange to deprive yourself of the benefit of the
+ patent license for this particular work, or (3) arrange, in a manner
+ consistent with the requirements of this License, to extend the patent
+ license to downstream recipients. "Knowingly relying" means you have
+ actual knowledge that, but for the patent license, your conveying the
+ covered work in a country, or your recipient's use of the covered work
+ in a country, would infringe one or more identifiable patents in that
+ country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+ arrangement, you convey, or propagate by procuring conveyance of, a
+ covered work, and grant a patent license to some of the parties
+ receiving the covered work authorizing them to use, propagate, modify
+ or convey a specific copy of the covered work, then the patent license
+ you grant is automatically extended to all recipients of the covered
+ work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+ the scope of its coverage, prohibits the exercise of, or is
+ conditioned on the non-exercise of one or more of the rights that are
+ specifically granted under this License. You may not convey a covered
+ work if you are a party to an arrangement with a third party that is
+ in the business of distributing software, under which you make payment
+ to the third party based on the extent of your activity of conveying
+ the work, and under which the third party grants, to any of the
+ parties who would receive the covered work from you, a discriminatory
+ patent license (a) in connection with copies of the covered work
+ conveyed by you (or copies made from those copies), or (b) primarily
+ for and in connection with specific products or compilations that
+ contain the covered work, unless you entered into that arrangement,
+ or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+ any implied license or other defenses to infringement that may
+ otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+ otherwise) that contradict the conditions of this License, they do not
+ excuse you from the conditions of this License. If you cannot convey a
+ covered work so as to satisfy simultaneously your obligations under this
+ License and any other pertinent obligations, then as a consequence you may
+ not convey it at all. For example, if you agree to terms that obligate you
+ to collect a royalty for further conveying from those to whom you convey
+ the Program, the only way you could satisfy both those terms and this
+ License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+ permission to link or combine any covered work with a work licensed
+ under version 3 of the GNU Affero General Public License into a single
+ combined work, and to convey the resulting work. The terms of this
+ License will continue to apply to the part which is the covered work,
+ but the special requirements of the GNU Affero General Public License,
+ section 13, concerning interaction through a network will apply to the
+ combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
566
+ the GNU General Public License from time to time. Such new versions will
567
+ be similar in spirit to the present version, but may differ in detail to
568
+ address new problems or concerns.
569
+
570
+ Each version is given a distinguishing version number. If the
571
+ Program specifies that a certain numbered version of the GNU General
572
+ Public License "or any later version" applies to it, you have the
573
+ option of following the terms and conditions either of that numbered
574
+ version or of any later version published by the Free Software
575
+ Foundation. If the Program does not specify a version number of the
576
+ GNU General Public License, you may choose any version ever published
577
+ by the Free Software Foundation.
578
+
579
+ If the Program specifies that a proxy can decide which future
580
+ versions of the GNU General Public License can be used, that proxy's
581
+ public statement of acceptance of a version permanently authorizes you
582
+ to choose that version for the Program.
583
+
584
+ Later license versions may give you additional or different
585
+ permissions. However, no additional obligations are imposed on any
586
+ author or copyright holder as a result of your choosing to follow a
587
+ later version.
588
+
589
+ 15. Disclaimer of Warranty.
590
+
591
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599
+
600
+ 16. Limitation of Liability.
601
+
602
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610
+ SUCH DAMAGES.
611
+
612
+ 17. Interpretation of Sections 15 and 16.
613
+
614
+ If the disclaimer of warranty and limitation of liability provided
615
+ above cannot be given local legal effect according to their terms,
616
+ reviewing courts shall apply local law that most closely approximates
617
+ an absolute waiver of all civil liability in connection with the
618
+ Program, unless a warranty or assumption of liability accompanies a
619
+ copy of the Program in return for a fee.
620
+
621
+ END OF TERMS AND CONDITIONS
622
+
623
+ How to Apply These Terms to Your New Programs
624
+
625
+ If you develop a new program, and you want it to be of the greatest
626
+ possible use to the public, the best way to achieve this is to make it
627
+ free software which everyone can redistribute and change under these terms.
628
+
629
+ To do so, attach the following notices to the program. It is safest
630
+ to attach them to the start of each source file to most effectively
631
+ state the exclusion of warranty; and each file should have at least
632
+ the "copyright" line and a pointer to where the full notice is found.
633
+
634
+ <one line to give the program's name and a brief idea of what it does.>
635
+ Copyright (C) <year> <name of author>
636
+
637
+ This program is free software: you can redistribute it and/or modify
638
+ it under the terms of the GNU General Public License as published by
639
+ the Free Software Foundation, either version 3 of the License, or
640
+ (at your option) any later version.
641
+
642
+ This program is distributed in the hope that it will be useful,
643
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
644
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645
+ GNU General Public License for more details.
646
+
647
+ You should have received a copy of the GNU General Public License
648
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
649
+
650
+ Also add information on how to contact you by electronic and paper mail.
651
+
652
+ If the program does terminal interaction, make it output a short
653
+ notice like this when it starts in an interactive mode:
654
+
655
+ <program> Copyright (C) <year> <name of author>
656
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657
+ This is free software, and you are welcome to redistribute it
658
+ under certain conditions; type `show c' for details.
659
+
660
+ The hypothetical commands `show w' and `show c' should show the appropriate
661
+ parts of the General Public License. Of course, your program's commands
662
+ might be different; for a GUI interface, you would use an "about box".
663
+
664
+ You should also get your employer (if you work as a programmer) or school,
665
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
666
+ For more information on this, and how to apply and follow the GNU GPL, see
667
+ <https://www.gnu.org/licenses/>.
668
+
669
+ The GNU General Public License does not permit incorporating your program
670
+ into proprietary programs. If your program is a subroutine library, you
671
+ may consider it more useful to permit linking proprietary applications with
672
+ the library. If this is what you want to do, use the GNU Lesser General
673
+ Public License instead of this License. But first, please read
674
+ <https://www.gnu.org/licenses/why-not-lgpl.html>.
__init__.py ADDED
@@ -0,0 +1,39 @@
+ import sys
+ import os
+
+ repo_dir = os.path.dirname(os.path.realpath(__file__))
+ sys.path.insert(0, repo_dir)
+ original_modules = sys.modules.copy()
+
+ # Place aside existing modules if using a1111 web ui
+ modules_used = [
+ "modules",
+ "modules.images",
+ "modules.processing",
+ "modules.scripts_postprocessing",
+ "modules.scripts",
+ "modules.shared",
+ ]
+ original_webui_modules = {}
+ for module in modules_used:
+ if module in sys.modules:
+ original_webui_modules[module] = sys.modules.pop(module)
+
+ # Proceed with node setup
+ from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
+
+ __all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS"]
+
+ # Clean up imports
+ # Remove repo directory from path
+ sys.path.remove(repo_dir)
+ # Remove any new modules
+ modules_to_remove = []
+ for module in sys.modules:
+ if module not in original_modules and not module.startswith("google.protobuf") and not module.startswith("onnx") and not module.startswith("cv2"):
+ modules_to_remove.append(module)
+ for module in modules_to_remove:
+ del sys.modules[module]
+
+ # Restore original modules
+ sys.modules.update(original_webui_modules)
install.bat ADDED
@@ -0,0 +1,37 @@
+ @echo off
+ setlocal enabledelayedexpansion
+
+ :: Try to use embedded python first
+ if exist ..\..\..\python_embeded\python.exe (
+ :: Use the embedded python
+ set PYTHON=..\..\..\python_embeded\python.exe
+ ) else (
+ :: Embedded python not found, check for python in the PATH
+ for /f "tokens=* USEBACKQ" %%F in (`python --version 2^>^&1`) do (
+ set PYTHON_VERSION=%%F
+ )
+ if errorlevel 1 (
+ echo I couldn't find an embedded version of Python, nor one in the Windows PATH. Please install manually.
+ pause
+ exit /b 1
+ ) else (
+ :: Use python from the PATH (if it's the right version and the user agrees)
+ echo I couldn't find an embedded version of Python, but I did find !PYTHON_VERSION! in your Windows PATH.
+ echo Would you like to proceed with the install using that version? (Y/N^)
+ set /p USE_PYTHON=
+ if /i "!USE_PYTHON!"=="Y" (
+ set PYTHON=python
+ ) else (
+ echo Okay. Please install manually.
+ pause
+ exit /b 1
+ )
+ )
+ )
+
+ :: Install the package
+ echo Installing...
+ %PYTHON% install.py
+ echo Done^!
+
+ @pause
install.py ADDED
@@ -0,0 +1,104 @@
+ import warnings
+ warnings.filterwarnings("ignore", category=DeprecationWarning)
+
+ import subprocess
+ import os, sys
+ try:
+ from pkg_resources import get_distribution as distributions
+ except ImportError:
+ from importlib_metadata import distributions
+ from tqdm import tqdm
+ import urllib.request
+ from packaging import version as pv
+ try:
+ from folder_paths import models_dir
+ except ImportError:
+ from pathlib import Path
+ models_dir = os.path.join(Path(__file__).parents[2], "models")
+
+ sys.path.append(os.path.dirname(os.path.realpath(__file__)))
+
+ req_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "requirements.txt")
+
+ model_url = "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/inswapper_128.onnx"
+ model_name = os.path.basename(model_url)
+ models_dir_path = os.path.join(models_dir, "insightface")
+ model_path = os.path.join(models_dir_path, model_name)
+
+ def run_pip(*args):
+ subprocess.run([sys.executable, "-m", "pip", "install", "--no-warn-script-location", *args])
+
+ def is_installed(
+ package: str, version: str = None, strict: bool = True
+ ):
+ has_package = None
+ try:
+ has_package = distributions(package)
+ if has_package is not None:
+ if version is not None:
+ installed_version = has_package.version
+ if (installed_version != version and strict) or (pv.parse(installed_version) < pv.parse(version) and not strict):
+ return False
+ else:
+ return True
+ else:
+ return True
+ else:
+ return False
+ except Exception as e:
+ print(f"Status: {e}")
+ return False
+
+ def download(url, path, name):
+ request = urllib.request.urlopen(url)
+ total = int(request.headers.get('Content-Length', 0))
+ with tqdm(total=total, desc=f'[ReActor] Downloading {name} to {path}', unit='B', unit_scale=True, unit_divisor=1024) as progress:
+ urllib.request.urlretrieve(url, path, reporthook=lambda count, block_size, total_size: progress.update(block_size))
+
+ if not os.path.exists(models_dir_path):
+ os.makedirs(models_dir_path)
+
+ if not os.path.exists(model_path):
+ download(model_url, model_path, model_name)
+
+ with open(req_file) as file:
+ try:
+ ort = "onnxruntime-gpu"
+ import torch
+ cuda_version = None
+ if torch.cuda.is_available():
+ cuda_version = torch.version.cuda
+ print(f"CUDA {cuda_version}")
+ elif torch.backends.mps.is_available() or hasattr(torch,'dml') or hasattr(torch,'privateuseone'):
+ ort = "onnxruntime"
+ if cuda_version is not None and float(cuda_version)>=12 and torch.torch_version.__version__ <= "2.2.0": # CU12.x and torch<=2.2.0
+ print(f"Torch: {torch.torch_version.__version__}")
+ if not is_installed(ort,"1.17.0",False):
+ run_pip(ort,"--extra-index-url", "https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/")
+ elif cuda_version is not None and float(cuda_version)>=12 and torch.torch_version.__version__ >= "2.4.0" : # CU12.x and latest torch
+ print(f"Torch: {torch.torch_version.__version__}")
+ if not is_installed(ort,"1.20.1",False): # latest ort-gpu
+ run_pip(ort,"-U")
+ elif not is_installed(ort,"1.16.1",False):
+ run_pip(ort, "-U")
+ except Exception as e:
+ print(e)
+ print(f"Warning: Failed to install {ort}, ReActor will not work.")
+ raise e
+ strict = True
+ for package in file:
+ package_version = None
+ try:
+ package = package.strip()
+ if "==" in package:
+ package_version = package.split('==')[1]
+ elif ">=" in package:
+ package_version = package.split('>=')[1]
+ strict = False
+ if not is_installed(package,package_version,strict):
+ run_pip(package)
+ except Exception as e:
+ print(e)
+ print(f"Warning: Failed to install {package}, ReActor will not work.")
+ raise e
+ print("Ok")
nodes.py ADDED
@@ -0,0 +1,1237 @@
+ import os, glob, sys
+ import logging
+
+ import torch
+ import torch.nn.functional as torchfn
+ from torchvision.transforms.functional import normalize
+ from torchvision.ops import masks_to_boxes
+
+ import numpy as np
+ import cv2
+ import math
+ from typing import List
+ from PIL import Image
+ from scipy import stats
+ from insightface.app.common import Face
+ from segment_anything import sam_model_registry
+
+ from modules.processing import StableDiffusionProcessingImg2Img
+ from modules.shared import state
+ # from comfy_extras.chainner_models import model_loading
+ import comfy.model_management as model_management
+ import comfy.utils
+ import folder_paths
+
+ import scripts.reactor_version
+ from r_chainner import model_loading
+ from scripts.reactor_faceswap import (
+ FaceSwapScript,
+ get_models,
+ get_current_faces_model,
+ analyze_faces,
+ half_det_size,
+ providers
+ )
+ from scripts.reactor_swapper import (
+ unload_all_models,
+ )
+ from scripts.reactor_logger import logger
+ from reactor_utils import (
+ batch_tensor_to_pil,
+ batched_pil_to_tensor,
+ tensor_to_pil,
+ img2tensor,
+ tensor2img,
+ save_face_model,
+ load_face_model,
+ download,
+ set_ort_session,
+ prepare_cropped_face,
+ normalize_cropped_face,
+ add_folder_path_and_extensions,
+ rgba2rgb_tensor
+ )
+ from reactor_patcher import apply_patch
+ from r_facelib.utils.face_restoration_helper import FaceRestoreHelper
+ from r_basicsr.utils.registry import ARCH_REGISTRY
+ import scripts.r_archs.codeformer_arch
+ import scripts.r_masking.subcore as subcore
+ import scripts.r_masking.core as core
+ import scripts.r_masking.segs as masking_segs
+
+
+ models_dir = folder_paths.models_dir
+ REACTOR_MODELS_PATH = os.path.join(models_dir, "reactor")
+ FACE_MODELS_PATH = os.path.join(REACTOR_MODELS_PATH, "faces")
+
+ if not os.path.exists(REACTOR_MODELS_PATH):
+ os.makedirs(REACTOR_MODELS_PATH)
+ if not os.path.exists(FACE_MODELS_PATH):
+ os.makedirs(FACE_MODELS_PATH)
+
+ dir_facerestore_models = os.path.join(models_dir, "facerestore_models")
+ os.makedirs(dir_facerestore_models, exist_ok=True)
+ folder_paths.folder_names_and_paths["facerestore_models"] = ([dir_facerestore_models], folder_paths.supported_pt_extensions)
+
+ BLENDED_FACE_MODEL = None
+ FACE_SIZE: int = 512
+ FACE_HELPER = None
+
+ if "ultralytics" not in folder_paths.folder_names_and_paths:
+ add_folder_path_and_extensions("ultralytics_bbox", [os.path.join(models_dir, "ultralytics", "bbox")], folder_paths.supported_pt_extensions)
+ add_folder_path_and_extensions("ultralytics_segm", [os.path.join(models_dir, "ultralytics", "segm")], folder_paths.supported_pt_extensions)
+ add_folder_path_and_extensions("ultralytics", [os.path.join(models_dir, "ultralytics")], folder_paths.supported_pt_extensions)
+ if "sams" not in folder_paths.folder_names_and_paths:
+ add_folder_path_and_extensions("sams", [os.path.join(models_dir, "sams")], folder_paths.supported_pt_extensions)
+
+ def get_facemodels():
+ models_path = os.path.join(FACE_MODELS_PATH, "*")
+ models = glob.glob(models_path)
+ models = [x for x in models if x.endswith(".safetensors")]
+ return models
+
+ def get_restorers():
+ models_path = os.path.join(models_dir, "facerestore_models/*")
+ models = glob.glob(models_path)
+ models = [x for x in models if (x.endswith(".pth") or x.endswith(".onnx"))]
+ if len(models) == 0:
+ fr_urls = [
+ "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/facerestore_models/GFPGANv1.3.pth",
+ "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/facerestore_models/GFPGANv1.4.pth",
+ "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/facerestore_models/codeformer-v0.1.0.pth",
+ "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/facerestore_models/GPEN-BFR-512.onnx",
+ "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/facerestore_models/GPEN-BFR-1024.onnx",
+ "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/facerestore_models/GPEN-BFR-2048.onnx",
+ ]
+ for model_url in fr_urls:
+ model_name = os.path.basename(model_url)
+ model_path = os.path.join(dir_facerestore_models, model_name)
+ download(model_url, model_path, model_name)
+ models = glob.glob(models_path)
+ models = [x for x in models if (x.endswith(".pth") or x.endswith(".onnx"))]
+ return models
+
+ def get_model_names(get_models):
+ models = get_models()
+ names = []
+ for x in models:
+ names.append(os.path.basename(x))
+ names.sort(key=str.lower)
+ names.insert(0, "none")
+ return names
+
+ def model_names():
+ models = get_models()
+ return {os.path.basename(x): x for x in models}
+
+
+ class reactor:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "enabled": ("BOOLEAN", {"default": True, "label_off": "OFF", "label_on": "ON"}),
+ "input_image": ("IMAGE",),
+ "swap_model": (list(model_names().keys()),),
+ "facedetection": (["retinaface_resnet50", "retinaface_mobile0.25", "YOLOv5l", "YOLOv5n"],),
+ "face_restore_model": (get_model_names(get_restorers),),
+ "face_restore_visibility": ("FLOAT", {"default": 1, "min": 0.1, "max": 1, "step": 0.05}),
+ "codeformer_weight": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1, "step": 0.05}),
+ "detect_gender_input": (["no","female","male"], {"default": "no"}),
+ "detect_gender_source": (["no","female","male"], {"default": "no"}),
+ "input_faces_index": ("STRING", {"default": "0"}),
+ "source_faces_index": ("STRING", {"default": "0"}),
+ "console_log_level": ([0, 1, 2], {"default": 1}),
+ },
+ "optional": {
+ "source_image": ("IMAGE",),
+ "face_model": ("FACE_MODEL",),
+ "face_boost": ("FACE_BOOST",),
+ },
+ "hidden": {"faces_order": "FACES_ORDER"},
+ }
+
+ RETURN_TYPES = ("IMAGE","FACE_MODEL")
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def __init__(self):
+ # self.face_helper = None
+ self.faces_order = ["large-small", "large-small"]
+ # self.face_size = FACE_SIZE
+ self.face_boost_enabled = False
+ self.restore = True
+ self.boost_model = None
+ self.interpolation = "Bicubic"
+ self.boost_model_visibility = 1
+ self.boost_cf_weight = 0.5
+
+ def restore_face(
+ self,
+ input_image,
+ face_restore_model,
+ face_restore_visibility,
+ codeformer_weight,
+ facedetection,
+ ):
+
+ result = input_image
+
+ if face_restore_model != "none" and not model_management.processing_interrupted():
+
+ global FACE_SIZE, FACE_HELPER
+
+ self.face_helper = FACE_HELPER
+
+ faceSize = 512
+ if "1024" in face_restore_model.lower():
+ faceSize = 1024
+ elif "2048" in face_restore_model.lower():
+ faceSize = 2048
+
+ logger.status(f"Restoring with {face_restore_model} | Face Size is set to {faceSize}")
+
+ model_path = folder_paths.get_full_path("facerestore_models", face_restore_model)
+
+ device = model_management.get_torch_device()
+
+ if "codeformer" in face_restore_model.lower():
+
+ codeformer_net = ARCH_REGISTRY.get("CodeFormer")(
+ dim_embd=512,
+ codebook_size=1024,
+ n_head=8,
+ n_layers=9,
+ connect_list=["32", "64", "128", "256"],
+ ).to(device)
+ checkpoint = torch.load(model_path)["params_ema"]
+ codeformer_net.load_state_dict(checkpoint)
+ facerestore_model = codeformer_net.eval()
+
+ elif ".onnx" in face_restore_model:
+
+ ort_session = set_ort_session(model_path, providers=providers)
+ ort_session_inputs = {}
+ facerestore_model = ort_session
+
+ else:
+
+ sd = comfy.utils.load_torch_file(model_path, safe_load=True)
+ facerestore_model = model_loading.load_state_dict(sd).eval()
+ facerestore_model.to(device)
+
+ if faceSize != FACE_SIZE or self.face_helper is None:
+ self.face_helper = FaceRestoreHelper(1, face_size=faceSize, crop_ratio=(1, 1), det_model=facedetection, save_ext='png', use_parse=True, device=device)
+ FACE_SIZE = faceSize
+ FACE_HELPER = self.face_helper
+
+ image_np = 255. * result.numpy()
+
+ total_images = image_np.shape[0]
+
+ out_images = []
+
+ for i in range(total_images):
+
+ if total_images > 1:
+ logger.status(f"Restoring {i+1}")
+
+ cur_image_np = image_np[i,:, :, ::-1]
+
+ original_resolution = cur_image_np.shape[0:2]
+
+ if facerestore_model is None or self.face_helper is None:
+ return result
+
+ self.face_helper.clean_all()
+ self.face_helper.read_image(cur_image_np)
+ self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)
+ self.face_helper.align_warp_face()
+
+ restored_face = None
+
+ for idx, cropped_face in enumerate(self.face_helper.cropped_faces):
+
+ # if ".pth" in face_restore_model:
+ cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
+ normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
+ cropped_face_t = cropped_face_t.unsqueeze(0).to(device)
+
+ try:
+
+ with torch.no_grad():
+
+ if ".onnx" in face_restore_model: # ONNX models
+
+ for ort_session_input in ort_session.get_inputs():
+ if ort_session_input.name == "input":
+ cropped_face_prep = prepare_cropped_face(cropped_face)
+ ort_session_inputs[ort_session_input.name] = cropped_face_prep
+ if ort_session_input.name == "weight":
+ weight = np.array([ 1 ], dtype = np.double)
+ ort_session_inputs[ort_session_input.name] = weight
+
+ output = ort_session.run(None, ort_session_inputs)[0][0]
+ restored_face = normalize_cropped_face(output)
+
+ else: # PTH models
+
+ output = facerestore_model(cropped_face_t, w=codeformer_weight)[0] if "codeformer" in face_restore_model.lower() else facerestore_model(cropped_face_t)[0]
+ restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
+
+ del output
+ torch.cuda.empty_cache()
+
+ except Exception as error:
+
+ print(f"\tFailed inference: {error}", file=sys.stderr)
+ restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))
+
+ if face_restore_visibility < 1:
+ restored_face = cropped_face * (1 - face_restore_visibility) + restored_face * face_restore_visibility
+
+ restored_face = restored_face.astype("uint8")
+ self.face_helper.add_restored_face(restored_face)
+
+ self.face_helper.get_inverse_affine(None)
+
+ restored_img = self.face_helper.paste_faces_to_input_image()
+ restored_img = restored_img[:, :, ::-1]
+
+ if original_resolution != restored_img.shape[0:2]:
+ restored_img = cv2.resize(restored_img, (0, 0), fx=original_resolution[1]/restored_img.shape[1], fy=original_resolution[0]/restored_img.shape[0], interpolation=cv2.INTER_AREA)
+
+ self.face_helper.clean_all()
+
+ # out_images[i] = restored_img
+ out_images.append(restored_img)
+
+ if state.interrupted or model_management.processing_interrupted():
+ logger.status("Interrupted by User")
+ return input_image
+
+ restored_img_np = np.array(out_images).astype(np.float32) / 255.0
+ restored_img_tensor = torch.from_numpy(restored_img_np)
+
+ result = restored_img_tensor
+
+ return result
+
+ def execute(self, enabled, input_image, swap_model, detect_gender_source, detect_gender_input, source_faces_index, input_faces_index, console_log_level, face_restore_model,face_restore_visibility, codeformer_weight, facedetection, source_image=None, face_model=None, faces_order=None, face_boost=None):
+
+ if face_boost is not None:
+ self.face_boost_enabled = face_boost["enabled"]
+ self.boost_model = face_boost["boost_model"]
+ self.interpolation = face_boost["interpolation"]
+ self.boost_model_visibility = face_boost["visibility"]
+ self.boost_cf_weight = face_boost["codeformer_weight"]
+ self.restore = face_boost["restore_with_main_after"]
+ else:
+ self.face_boost_enabled = False
+
+ if faces_order is None:
+ faces_order = self.faces_order
+
+ apply_patch(console_log_level)
+
+ if not enabled:
+ return (input_image,face_model)
+ elif source_image is None and face_model is None:
+ logger.error("Please provide 'source_image' or `face_model`")
+ return (input_image,face_model)
+
+ if face_model == "none":
+ face_model = None
+
+ script = FaceSwapScript()
+ pil_images = batch_tensor_to_pil(input_image)
+ if source_image is not None:
+ source = tensor_to_pil(source_image)
+ else:
+ source = None
+ p = StableDiffusionProcessingImg2Img(pil_images)
+ script.process(
+ p=p,
+ img=source,
+ enable=True,
+ source_faces_index=source_faces_index,
+ faces_index=input_faces_index,
+ model=swap_model,
+ swap_in_source=True,
+ swap_in_generated=True,
+ gender_source=detect_gender_source,
+ gender_target=detect_gender_input,
+ face_model=face_model,
+ faces_order=faces_order,
+ # face boost:
+ face_boost_enabled=self.face_boost_enabled,
+ face_restore_model=self.boost_model,
+ face_restore_visibility=self.boost_model_visibility,
+ codeformer_weight=self.boost_cf_weight,
+ interpolation=self.interpolation,
+ )
+ result = batched_pil_to_tensor(p.init_images)
+
+ if face_model is None:
+ current_face_model = get_current_faces_model()
+ face_model_to_provide = current_face_model[0] if (current_face_model is not None and len(current_face_model) > 0) else face_model
+ else:
+ face_model_to_provide = face_model
+
+ if self.restore or not self.face_boost_enabled:
+ result = reactor.restore_face(self,result,face_restore_model,face_restore_visibility,codeformer_weight,facedetection)
+
+ return (result,face_model_to_provide)
+
+
+ class ReActorPlusOpt:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "enabled": ("BOOLEAN", {"default": True, "label_off": "OFF", "label_on": "ON"}),
+ "input_image": ("IMAGE",),
+ "swap_model": (list(model_names().keys()),),
+ "facedetection": (["retinaface_resnet50", "retinaface_mobile0.25", "YOLOv5l", "YOLOv5n"],),
+ "face_restore_model": (get_model_names(get_restorers),),
+ "face_restore_visibility": ("FLOAT", {"default": 1, "min": 0.1, "max": 1, "step": 0.05}),
+ "codeformer_weight": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1, "step": 0.05}),
+ },
+ "optional": {
+ "source_image": ("IMAGE",),
+ "face_model": ("FACE_MODEL",),
+ "options": ("OPTIONS",),
+ "face_boost": ("FACE_BOOST",),
+ }
+ }
+
+ RETURN_TYPES = ("IMAGE","FACE_MODEL")
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def __init__(self):
+ # self.face_helper = None
+ self.faces_order = ["large-small", "large-small"]
+ self.detect_gender_input = "no"
+ self.detect_gender_source = "no"
+ self.input_faces_index = "0"
+ self.source_faces_index = "0"
+ self.console_log_level = 1
+ # self.face_size = 512
+ self.face_boost_enabled = False
+ self.restore = True
+ self.boost_model = None
+ self.interpolation = "Bicubic"
+ self.boost_model_visibility = 1
+ self.boost_cf_weight = 0.5
+
+ def execute(self, enabled, input_image, swap_model, facedetection, face_restore_model, face_restore_visibility, codeformer_weight, source_image=None, face_model=None, options=None, face_boost=None):
+
+ if options is not None:
+ self.faces_order = [options["input_faces_order"], options["source_faces_order"]]
+ self.console_log_level = options["console_log_level"]
+ self.detect_gender_input = options["detect_gender_input"]
+ self.detect_gender_source = options["detect_gender_source"]
+ self.input_faces_index = options["input_faces_index"]
+ self.source_faces_index = options["source_faces_index"]
437
+
438
+ if face_boost is not None:
439
+ self.face_boost_enabled = face_boost["enabled"]
440
+ self.restore = face_boost["restore_with_main_after"]
441
+ else:
442
+ self.face_boost_enabled = False
443
+
444
+ result = reactor.execute(
445
+ self,enabled,input_image,swap_model,self.detect_gender_source,self.detect_gender_input,self.source_faces_index,self.input_faces_index,self.console_log_level,face_restore_model,face_restore_visibility,codeformer_weight,facedetection,source_image,face_model,self.faces_order, face_boost=face_boost
446
+ )
447
+
448
+ return result
449
+
+
+ class LoadFaceModel:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "face_model": (get_model_names(get_facemodels),),
+ }
+ }
+
+ RETURN_TYPES = ("FACE_MODEL",)
+ FUNCTION = "load_model"
+ CATEGORY = "🌌 ReActor"
+
+ def load_model(self, face_model):
+ self.face_model = face_model
+ self.face_models_path = FACE_MODELS_PATH
+ if self.face_model != "none":
+ face_model_path = os.path.join(self.face_models_path, self.face_model)
+ out = load_face_model(face_model_path)
+ else:
+ out = None
+ return (out, )
+
+
+ class BuildFaceModel:
+ def __init__(self):
+ self.output_dir = FACE_MODELS_PATH
+
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "save_mode": ("BOOLEAN", {"default": True, "label_off": "OFF", "label_on": "ON"}),
+ "send_only": ("BOOLEAN", {"default": False, "label_off": "NO", "label_on": "YES"}),
+ "face_model_name": ("STRING", {"default": "default"}),
+ "compute_method": (["Mean", "Median", "Mode"], {"default": "Mean"}),
+ },
+ "optional": {
+ "images": ("IMAGE",),
+ "face_models": ("FACE_MODEL",),
+ }
+ }
+
+ RETURN_TYPES = ("FACE_MODEL",)
+ FUNCTION = "blend_faces"
+
+ OUTPUT_NODE = True
+
+ CATEGORY = "🌌 ReActor"
+
+ def build_face_model(self, image: Image.Image, det_size=(640, 640)):
+ logging.StreamHandler.terminator = "\n"
+ if image is None:
+ error_msg = "Please load an Image"
+ logger.error(error_msg)
+ return error_msg
+ image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
+ face_model = analyze_faces(image, det_size)
+
+ if len(face_model) == 0:
+ print("")
+ det_size_half = half_det_size(det_size)
+ face_model = analyze_faces(image, det_size_half)
+ if face_model is not None and len(face_model) > 0:
+ print("...........................................................", end=" ")
+
+ if face_model is not None and len(face_model) > 0:
+ return face_model[0]
+ else:
+ no_face_msg = "No face found, please try another image"
+ # logger.error(no_face_msg)
+ return no_face_msg
+
+ def blend_faces(self, save_mode, send_only, face_model_name, compute_method, images=None, face_models=None):
+ global BLENDED_FACE_MODEL
+ blended_face: Face = BLENDED_FACE_MODEL
+
+ if send_only and blended_face is None:
+ send_only = False
+
+ if (images is not None or face_models is not None) and not send_only:
+
+ faces = []
+ embeddings = []
+
+ apply_patch(1)
+
+ if images is not None:
+ images_list: List[Image.Image] = batch_tensor_to_pil(images)
+
+ n = len(images_list)
+
+ for i,image in enumerate(images_list):
+ logging.StreamHandler.terminator = " "
+ logger.status(f"Building Face Model {i+1} of {n}...")
+ face = self.build_face_model(image)
+ if isinstance(face, str):
+ logger.error(f"No faces found in image {i+1}, skipping")
+ continue
+ else:
+ print(f"{int(((i+1)/n)*100)}%")
+ faces.append(face)
+ embeddings.append(face.embedding)
+
+ elif face_models is not None:
+
+ n = len(face_models)
+
+ for i,face_model in enumerate(face_models):
+ logging.StreamHandler.terminator = " "
+ logger.status(f"Extracting Face Model {i+1} of {n}...")
+ face = face_model
+ if isinstance(face, str):
+ logger.error(f"No faces found for face_model {i+1}, skipping")
+ continue
+ else:
+ print(f"{int(((i+1)/n)*100)}%")
+ faces.append(face)
+ embeddings.append(face.embedding)
+
+ logging.StreamHandler.terminator = "\n"
+ if len(faces) > 0:
+ # compute_method_name = "Mean" if compute_method == 0 else "Median" if compute_method == 1 else "Mode"
+ logger.status(f"Blending with Compute Method '{compute_method}'...")
+ blended_embedding = np.mean(embeddings, axis=0) if compute_method == "Mean" else np.median(embeddings, axis=0) if compute_method == "Median" else stats.mode(embeddings, axis=0)[0].astype(np.float32)
+ blended_face = Face(
+ bbox=faces[0].bbox,
+ kps=faces[0].kps,
+ det_score=faces[0].det_score,
+ landmark_3d_68=faces[0].landmark_3d_68,
+ pose=faces[0].pose,
+ landmark_2d_106=faces[0].landmark_2d_106,
+ embedding=blended_embedding,
+ gender=faces[0].gender,
+ age=faces[0].age
+ )
+ if blended_face is not None:
+ BLENDED_FACE_MODEL = blended_face
+ if save_mode:
+ face_model_path = os.path.join(FACE_MODELS_PATH, face_model_name + ".safetensors")
+ save_face_model(blended_face,face_model_path)
+ # done_msg = f"Face model has been saved to '{face_model_path}'"
+ # logger.status(done_msg)
+ logger.status("--Done!--")
+ # return (blended_face,)
+ else:
+ no_face_msg = "Something went wrong, please try another set of images"
+ logger.error(no_face_msg)
+ # return (blended_face,)
+ # logger.status("--Done!--")
+ if images is None and face_models is None:
+ logger.error("Please provide `images` or `face_models`")
+ return (blended_face,)
+
+
+ class SaveFaceModel:
+ def __init__(self):
+ self.output_dir = FACE_MODELS_PATH
+
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "save_mode": ("BOOLEAN", {"default": True, "label_off": "OFF", "label_on": "ON"}),
+ "face_model_name": ("STRING", {"default": "default"}),
+ "select_face_index": ("INT", {"default": 0, "min": 0}),
+ },
+ "optional": {
+ "image": ("IMAGE",),
+ "face_model": ("FACE_MODEL",),
+ }
+ }
+
+ RETURN_TYPES = ()
+ FUNCTION = "save_model"
+
+ OUTPUT_NODE = True
+
+ CATEGORY = "🌌 ReActor"
+
+ def save_model(self, save_mode, face_model_name, select_face_index, image=None, face_model=None, det_size=(640, 640)):
+ if save_mode and image is not None:
+ source = tensor_to_pil(image)
+ source = cv2.cvtColor(np.array(source), cv2.COLOR_RGB2BGR)
+ apply_patch(1)
+ logger.status("Building Face Model...")
+ face_model_raw = analyze_faces(source, det_size)
+ if len(face_model_raw) == 0:
+ det_size_half = half_det_size(det_size)
+ face_model_raw = analyze_faces(source, det_size_half)
+ try:
+ face_model = face_model_raw[select_face_index]
+ except Exception:
+ logger.error("No face(s) found")
+ return face_model_name
+ logger.status("--Done!--")
+ if save_mode and face_model is not None and face_model != "none":
+ face_model_path = os.path.join(self.output_dir, face_model_name + ".safetensors")
+ save_face_model(face_model,face_model_path)
+ if image is None and face_model is None:
+ logger.error("Please provide `face_model` or `image`")
+ return face_model_name
+
+
+ class RestoreFace:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "image": ("IMAGE",),
+ "facedetection": (["retinaface_resnet50", "retinaface_mobile0.25", "YOLOv5l", "YOLOv5n"],),
+ "model": (get_model_names(get_restorers),),
+ "visibility": ("FLOAT", {"default": 1, "min": 0.0, "max": 1, "step": 0.05}),
+ "codeformer_weight": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1, "step": 0.05}),
+ },
+ }
+
+ RETURN_TYPES = ("IMAGE",)
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ # def __init__(self):
+ # self.face_helper = None
+ # self.face_size = 512
+
+ def execute(self, image, model, visibility, codeformer_weight, facedetection):
+ result = reactor.restore_face(self,image,model,visibility,codeformer_weight,facedetection)
+ return (result,)
+
+
+ class MaskHelper:
+ def __init__(self):
+ # self.threshold = 0.5
+ # self.dilation = 10
+ # self.crop_factor = 3.0
+ # self.drop_size = 1
+ self.labels = "all"
+ self.detailer_hook = None
+ self.device_mode = "AUTO"
+ self.detection_hint = "center-1"
+ # self.sam_dilation = 0
+ # self.sam_threshold = 0.93
+ # self.bbox_expansion = 0
+ # self.mask_hint_threshold = 0.7
+ # self.mask_hint_use_negative = "False"
+ # self.force_resize_width = 0
+ # self.force_resize_height = 0
+ # self.resize_behavior = "source_size"
+
+ @classmethod
+ def INPUT_TYPES(s):
+ bboxs = ["bbox/"+x for x in folder_paths.get_filename_list("ultralytics_bbox")]
+ segms = ["segm/"+x for x in folder_paths.get_filename_list("ultralytics_segm")]
+ sam_models = [x for x in folder_paths.get_filename_list("sams") if 'hq' not in x]
+ return {
+ "required": {
+ "image": ("IMAGE",),
+ "swapped_image": ("IMAGE",),
+ "bbox_model_name": (bboxs + segms, ),
+ "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
+ "bbox_dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}),
+ "bbox_crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}),
+ "bbox_drop_size": ("INT", {"min": 1, "max": 8192, "step": 1, "default": 10}),
+ "sam_model_name": (sam_models, ),
+ "sam_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}),
+ "sam_threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}),
+ "bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}),
+ "mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}),
+ "mask_hint_use_negative": (["False", "Small", "Outter"], ),
+ "morphology_operation": (["dilate", "erode", "open", "close"],),
+ "morphology_distance": ("INT", {"default": 0, "min": 0, "max": 128, "step": 1}),
+ "blur_radius": ("INT", {"default": 9, "min": 0, "max": 48, "step": 1}),
+ "sigma_factor": ("FLOAT", {"default": 1.0, "min": 0.01, "max": 3., "step": 0.01}),
+ },
+ "optional": {
+ "mask_optional": ("MASK",),
+ }
+ }
+
+ RETURN_TYPES = ("IMAGE","MASK","IMAGE","IMAGE")
+ RETURN_NAMES = ("IMAGE","MASK","MASK_PREVIEW","SWAPPED_FACE")
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self, image, swapped_image, bbox_model_name, bbox_threshold, bbox_dilation, bbox_crop_factor, bbox_drop_size, sam_model_name, sam_dilation, sam_threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative, morphology_operation, morphology_distance, blur_radius, sigma_factor, mask_optional=None):
+
+ # images = [image[i:i + 1, ...] for i in range(image.shape[0])]
+
+ images = image
+
+ if mask_optional is None:
+
+ bbox_model_path = folder_paths.get_full_path("ultralytics", bbox_model_name)
+ bbox_model = subcore.load_yolo(bbox_model_path)
+ bbox_detector = subcore.UltraBBoxDetector(bbox_model)
+
+ segs = bbox_detector.detect(images, bbox_threshold, bbox_dilation, bbox_crop_factor, bbox_drop_size, self.detailer_hook)
+
+ if isinstance(self.labels, list):
+ self.labels = str(self.labels[0])
+
+ if self.labels is not None and self.labels != '':
+ self.labels = self.labels.split(',')
+ if len(self.labels) > 0:
+ segs, _ = masking_segs.filter(segs, self.labels)
+ # segs, _ = masking_segs.filter(segs, "all")
+
+ sam_modelname = folder_paths.get_full_path("sams", sam_model_name)
+
+ if 'vit_h' in sam_model_name:
+ model_kind = 'vit_h'
+ elif 'vit_l' in sam_model_name:
+ model_kind = 'vit_l'
+ else:
+ model_kind = 'vit_b'
+
+ sam = sam_model_registry[model_kind](checkpoint=sam_modelname)
+ size = os.path.getsize(sam_modelname)
+ sam.safe_to = core.SafeToGPU(size)
+
+ device = model_management.get_torch_device()
+
+ sam.safe_to.to_device(sam, device)
+
+ sam.is_auto_mode = self.device_mode == "AUTO"
+
+ combined_mask, _ = core.make_sam_mask_segmented(sam, segs, images, self.detection_hint, sam_dilation, sam_threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative)
+
+ else:
+ combined_mask = mask_optional
+
+ # *** MASK TO IMAGE ***:
+
+ mask_image = combined_mask.reshape((-1, 1, combined_mask.shape[-2], combined_mask.shape[-1])).movedim(1, -1).expand(-1, -1, -1, 3)
+
+ # *** MASK MORPH ***:
+
+ mask_image = core.tensor2mask(mask_image)
+
+ if morphology_operation == "dilate":
+ mask_image = self.dilate(mask_image, morphology_distance)
+ elif morphology_operation == "erode":
+ mask_image = self.erode(mask_image, morphology_distance)
+ elif morphology_operation == "open":
+ mask_image = self.erode(mask_image, morphology_distance)
+ mask_image = self.dilate(mask_image, morphology_distance)
+ elif morphology_operation == "close":
+ mask_image = self.dilate(mask_image, morphology_distance)
+ mask_image = self.erode(mask_image, morphology_distance)
+
+ # *** MASK BLUR ***:
+
+ if len(mask_image.size()) == 3:
+ mask_image = mask_image.unsqueeze(3)
+
+ mask_image = mask_image.permute(0, 3, 1, 2)
+ kernel_size = blur_radius * 2 + 1
+ sigma = sigma_factor * (0.6 * blur_radius - 0.3)
+ mask_image_final = self.gaussian_blur(mask_image, kernel_size, sigma).permute(0, 2, 3, 1)
+ if mask_image_final.size()[3] == 1:
+ mask_image_final = mask_image_final[:, :, :, 0]
+
+ # *** CUT BY MASK ***:
+
+ if len(swapped_image.shape) < 4:
+ C = 1
+ else:
+ C = swapped_image.shape[3]
+
+ # We operate on RGBA to keep the code clean and then convert back after
+ swapped_image = core.tensor2rgba(swapped_image)
+ mask = core.tensor2mask(mask_image_final)
+
+ # Scale the mask to be a matching size if it isn't
+ B, H, W, _ = swapped_image.shape
+ mask = torch.nn.functional.interpolate(mask.unsqueeze(1), size=(H, W), mode='nearest')[:,0,:,:]
+ MB, _, _ = mask.shape
+
+ if MB < B:
+ assert(B % MB == 0)
+ mask = mask.repeat(B // MB, 1, 1)
+
+ # masks_to_boxes errors if the tensor is all zeros, so we'll add a single pixel and zero it out at the end
+ is_empty = ~torch.gt(torch.max(torch.reshape(mask,[MB, H * W]), dim=1).values, 0.)
+ mask[is_empty,0,0] = 1.
+ boxes = masks_to_boxes(mask)
+ mask[is_empty,0,0] = 0.
+
+ min_x = boxes[:,0]
+ min_y = boxes[:,1]
+ max_x = boxes[:,2]
+ max_y = boxes[:,3]
+
+ width = max_x - min_x + 1
+ height = max_y - min_y + 1
+
+ use_width = int(torch.max(width).item())
+ use_height = int(torch.max(height).item())
+
+ # if self.force_resize_width > 0:
+ # use_width = self.force_resize_width
+
+ # if self.force_resize_height > 0:
+ # use_height = self.force_resize_height
+
+ alpha_mask = torch.ones((B, H, W, 4))
+ alpha_mask[:,:,:,3] = mask
+
+ swapped_image = swapped_image * alpha_mask
+
+ cutted_image = torch.zeros((B, use_height, use_width, 4))
+ for i in range(0, B):
+ if not is_empty[i]:
+ ymin = int(min_y[i].item())
+ ymax = int(max_y[i].item())
+ xmin = int(min_x[i].item())
+ xmax = int(max_x[i].item())
+ single = (swapped_image[i, ymin:ymax+1, xmin:xmax+1,:]).unsqueeze(0)
+ resized = torch.nn.functional.interpolate(single.permute(0, 3, 1, 2), size=(use_height, use_width), mode='bicubic').permute(0, 2, 3, 1)
+ cutted_image[i] = resized[0]
+
+ # Preserve our type unless we were previously RGB and added non-opaque alpha due to the mask size
+ if C == 1:
+ cutted_image = core.tensor2mask(cutted_image)
+ elif C == 3 and torch.min(cutted_image[:,:,:,3]) == 1:
+ cutted_image = core.tensor2rgb(cutted_image)
+
+ # *** PASTE BY MASK ***:
+
+ image_base = core.tensor2rgba(images)
+ image_to_paste = core.tensor2rgba(cutted_image)
+ mask = core.tensor2mask(mask_image_final)
+
+ # Scale the mask to be a matching size if it isn't
+ B, H, W, C = image_base.shape
+ MB = mask.shape[0]
+ PB = image_to_paste.shape[0]
+
+ if B < PB:
+ assert(PB % B == 0)
+ image_base = image_base.repeat(PB // B, 1, 1, 1)
+ B, H, W, C = image_base.shape
+ if MB < B:
+ assert(B % MB == 0)
+ mask = mask.repeat(B // MB, 1, 1)
+ elif B < MB:
+ assert(MB % B == 0)
+ image_base = image_base.repeat(MB // B, 1, 1, 1)
+ if PB < B:
+ assert(B % PB == 0)
+ image_to_paste = image_to_paste.repeat(B // PB, 1, 1, 1)
+
+ mask = torch.nn.functional.interpolate(mask.unsqueeze(1), size=(H, W), mode='nearest')[:,0,:,:]
+ MB, MH, MW = mask.shape
+
+ # masks_to_boxes errors if the tensor is all zeros, so we'll add a single pixel and zero it out at the end
+ is_empty = ~torch.gt(torch.max(torch.reshape(mask,[MB, MH * MW]), dim=1).values, 0.)
+ mask[is_empty,0,0] = 1.
+ boxes = masks_to_boxes(mask)
+ mask[is_empty,0,0] = 0.
+
+ min_x = boxes[:,0]
+ min_y = boxes[:,1]
+ max_x = boxes[:,2]
+ max_y = boxes[:,3]
+ mid_x = (min_x + max_x) / 2
+ mid_y = (min_y + max_y) / 2
+
+ target_width = max_x - min_x + 1
+ target_height = max_y - min_y + 1
+
+ result = image_base.detach().clone()
+ face_segment = mask_image_final
+
+ for i in range(0, MB):
+ if is_empty[i]:
+ continue
+ else:
+ image_index = i
+ source_size = image_to_paste.size()
+ SB, SH, SW, _ = image_to_paste.shape
+
+ # Figure out the desired size
+ width = int(target_width[i].item())
+ height = int(target_height[i].item())
+ # if self.resize_behavior == "keep_ratio_fill":
+ # target_ratio = width / height
+ # actual_ratio = SW / SH
+ # if actual_ratio > target_ratio:
+ # width = int(height * actual_ratio)
+ # elif actual_ratio < target_ratio:
+ # height = int(width / actual_ratio)
+ # elif self.resize_behavior == "keep_ratio_fit":
+ # target_ratio = width / height
+ # actual_ratio = SW / SH
+ # if actual_ratio > target_ratio:
+ # height = int(width / actual_ratio)
+ # elif actual_ratio < target_ratio:
+ # width = int(height * actual_ratio)
+ # elif self.resize_behavior == "source_size" or self.resize_behavior == "source_size_unmasked":
+
+ width = SW
+ height = SH
+
+ # Resize the image we're pasting if needed
+ resized_image = image_to_paste[i].unsqueeze(0)
+ # if SH != height or SW != width:
+ # resized_image = torch.nn.functional.interpolate(resized_image.permute(0, 3, 1, 2), size=(height,width), mode='bicubic').permute(0, 2, 3, 1)
+
+ pasting = torch.ones([H, W, C])
+ ymid = float(mid_y[i].item())
+ ymin = int(math.floor(ymid - height / 2)) + 1
+ ymax = int(math.floor(ymid + height / 2)) + 1
+ xmid = float(mid_x[i].item())
+ xmin = int(math.floor(xmid - width / 2)) + 1
+ xmax = int(math.floor(xmid + width / 2)) + 1
+
+ _, source_ymax, source_xmax, _ = resized_image.shape
+ source_ymin, source_xmin = 0, 0
+
+ if xmin < 0:
+ source_xmin = abs(xmin)
+ xmin = 0
+ if ymin < 0:
+ source_ymin = abs(ymin)
+ ymin = 0
+ if xmax > W:
+ source_xmax -= (xmax - W)
+ xmax = W
+ if ymax > H:
+ source_ymax -= (ymax - H)
+ ymax = H
+
+ pasting[ymin:ymax, xmin:xmax, :] = resized_image[0, source_ymin:source_ymax, source_xmin:source_xmax, :]
+ pasting[:, :, 3] = 1.
+
+ pasting_alpha = torch.zeros([H, W])
+ pasting_alpha[ymin:ymax, xmin:xmax] = resized_image[0, source_ymin:source_ymax, source_xmin:source_xmax, 3]
+
+ # if self.resize_behavior == "keep_ratio_fill" or self.resize_behavior == "source_size_unmasked":
+ # # If we explicitly want to fill the area, we are ok with extending outside
+ # paste_mask = pasting_alpha.unsqueeze(2).repeat(1, 1, 4)
+ # else:
+ # paste_mask = torch.min(pasting_alpha, mask[i]).unsqueeze(2).repeat(1, 1, 4)
+ paste_mask = torch.min(pasting_alpha, mask[i]).unsqueeze(2).repeat(1, 1, 4)
+ result[image_index] = pasting * paste_mask + result[image_index] * (1. - paste_mask)
+
+ face_segment = result
+
+ face_segment[...,3] = mask[i]
+
+ result = rgba2rgb_tensor(result)
+
+ return (result,combined_mask,mask_image_final,face_segment,)
+
+ def gaussian_blur(self, image, kernel_size, sigma):
+ kernel = torch.Tensor(kernel_size, kernel_size).to(device=image.device)
+ center = kernel_size // 2
+ variance = sigma**2
+ for i in range(kernel_size):
+ for j in range(kernel_size):
+ x = i - center
+ y = j - center
+ kernel[i, j] = math.exp(-(x**2 + y**2)/(2*variance))
+ kernel /= kernel.sum()
+
+ # Pad the input tensor
+ padding = (kernel_size - 1) // 2
+ input_pad = torch.nn.functional.pad(image, (padding, padding, padding, padding), mode='reflect')
+
+ # Reshape the padded input tensor for batched convolution
+ batch_size, num_channels, height, width = image.shape
+ input_reshaped = input_pad.reshape(batch_size*num_channels, 1, height+padding*2, width+padding*2)
+
+ # Perform batched convolution with the Gaussian kernel
+ output_reshaped = torch.nn.functional.conv2d(input_reshaped, kernel.unsqueeze(0).unsqueeze(0))
+
+ # Reshape the output tensor to its original shape
+ output_tensor = output_reshaped.reshape(batch_size, num_channels, height, width)
+
+ return output_tensor
+
+ def erode(self, image, distance):
+ return 1. - self.dilate(1. - image, distance)
+
+ def dilate(self, image, distance):
+ kernel_size = 1 + distance * 2
+ # Add the channels dimension
+ image = image.unsqueeze(1)
+ out = torchfn.max_pool2d(image, kernel_size=kernel_size, stride=1, padding=kernel_size // 2).squeeze(1)
+ return out
+
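MaskHelper's `dilate`/`erode` pair above implements grayscale morphology with a single max-pool, using the duality "erosion = dilation of the inverted mask". A minimal standalone sketch (assuming only `torch` is available; function names mirror the methods above but are free functions here) shows the behavior on a one-pixel mask:

```python
import torch
import torch.nn.functional as F

def dilate(mask: torch.Tensor, distance: int) -> torch.Tensor:
    # Dilation of a [B, H, W] mask: max-pool with a (2*distance+1) window,
    # stride 1 and "same" padding, as in MaskHelper.dilate.
    kernel_size = 1 + distance * 2
    return F.max_pool2d(mask.unsqueeze(1), kernel_size=kernel_size,
                        stride=1, padding=kernel_size // 2).squeeze(1)

def erode(mask: torch.Tensor, distance: int) -> torch.Tensor:
    # Erosion via duality: invert, dilate, invert back (MaskHelper.erode).
    return 1. - dilate(1. - mask, distance)

mask = torch.zeros(1, 7, 7)
mask[0, 3, 3] = 1.
grown = dilate(mask, 1)   # the single pixel grows to a 3x3 block
shrunk = erode(grown, 1)  # eroding the block recovers the single pixel
```

This is also why the node's "open" is erode-then-dilate and "close" is dilate-then-erode: both are compositions of the same two primitives.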
+
+ class ImageDublicator:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "image": ("IMAGE",),
+ "count": ("INT", {"default": 1, "min": 0}),
+ },
+ }
+
+ RETURN_TYPES = ("IMAGE",)
+ RETURN_NAMES = ("IMAGES",)
+ OUTPUT_IS_LIST = (True,)
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self, image, count):
+ images = [image for i in range(count)]
+ return (images,)
+
+
+ class ImageRGBA2RGB:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "image": ("IMAGE",),
+ },
+ }
+
+ RETURN_TYPES = ("IMAGE",)
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self, image):
+ out = rgba2rgb_tensor(image)
+ return (out,)
+
+
+ class MakeFaceModelBatch:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "face_model1": ("FACE_MODEL",),
+ },
+ "optional": {
+ "face_model2": ("FACE_MODEL",),
+ "face_model3": ("FACE_MODEL",),
+ "face_model4": ("FACE_MODEL",),
+ "face_model5": ("FACE_MODEL",),
+ "face_model6": ("FACE_MODEL",),
+ "face_model7": ("FACE_MODEL",),
+ "face_model8": ("FACE_MODEL",),
+ "face_model9": ("FACE_MODEL",),
+ "face_model10": ("FACE_MODEL",),
+ },
+ }
+
+ RETURN_TYPES = ("FACE_MODEL",)
+ RETURN_NAMES = ("FACE_MODELS",)
+ FUNCTION = "execute"
+
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self, **kwargs):
+ if len(kwargs) > 0:
+ face_models = [value for value in kwargs.values()]
+ return (face_models,)
+ else:
+ logger.error("Please provide at least 1 `face_model`")
+ return (None,)
+
+
+ class ReActorOptions:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "input_faces_order": (
+ ["left-right","right-left","top-bottom","bottom-top","small-large","large-small"], {"default": "large-small"}
+ ),
+ "input_faces_index": ("STRING", {"default": "0"}),
+ "detect_gender_input": (["no","female","male"], {"default": "no"}),
+ "source_faces_order": (
+ ["left-right","right-left","top-bottom","bottom-top","small-large","large-small"], {"default": "large-small"}
+ ),
+ "source_faces_index": ("STRING", {"default": "0"}),
+ "detect_gender_source": (["no","female","male"], {"default": "no"}),
+ "console_log_level": ([0, 1, 2], {"default": 1}),
+ }
+ }
+
+ RETURN_TYPES = ("OPTIONS",)
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self,input_faces_order, input_faces_index, detect_gender_input, source_faces_order, source_faces_index, detect_gender_source, console_log_level):
+ options: dict = {
+ "input_faces_order": input_faces_order,
+ "input_faces_index": input_faces_index,
+ "detect_gender_input": detect_gender_input,
+ "source_faces_order": source_faces_order,
+ "source_faces_index": source_faces_index,
+ "detect_gender_source": detect_gender_source,
+ "console_log_level": console_log_level,
+ }
+ return (options, )
+
+
+ class ReActorFaceBoost:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "enabled": ("BOOLEAN", {"default": True, "label_off": "OFF", "label_on": "ON"}),
+ "boost_model": (get_model_names(get_restorers),),
+ "interpolation": (["Nearest","Bilinear","Bicubic","Lanczos"], {"default": "Bicubic"}),
+ "visibility": ("FLOAT", {"default": 1, "min": 0.1, "max": 1, "step": 0.05}),
+ "codeformer_weight": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1, "step": 0.05}),
+ "restore_with_main_after": ("BOOLEAN", {"default": False}),
+ }
+ }
+
+ RETURN_TYPES = ("FACE_BOOST",)
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self,enabled,boost_model,interpolation,visibility,codeformer_weight,restore_with_main_after):
+ face_boost: dict = {
+ "enabled": enabled,
+ "boost_model": boost_model,
+ "interpolation": interpolation,
+ "visibility": visibility,
+ "codeformer_weight": codeformer_weight,
+ "restore_with_main_after": restore_with_main_after,
+ }
+ return (face_boost, )
+
+ class ReActorUnload:
+ @classmethod
+ def INPUT_TYPES(s):
+ return {
+ "required": {
+ "trigger": ("IMAGE", ),
+ },
+ }
+
+ RETURN_TYPES = ("IMAGE",)
+ FUNCTION = "execute"
+ CATEGORY = "🌌 ReActor"
+
+ def execute(self, trigger):
+ unload_all_models()
+ return (trigger,)
+
+
+ NODE_CLASS_MAPPINGS = {
+ # --- MAIN NODES ---
+ "ReActorFaceSwap": reactor,
+ "ReActorFaceSwapOpt": ReActorPlusOpt,
+ "ReActorOptions": ReActorOptions,
+ "ReActorFaceBoost": ReActorFaceBoost,
+ "ReActorMaskHelper": MaskHelper,
+ # --- Operations with Face Models ---
+ "ReActorSaveFaceModel": SaveFaceModel,
+ "ReActorLoadFaceModel": LoadFaceModel,
+ "ReActorBuildFaceModel": BuildFaceModel,
+ "ReActorMakeFaceModelBatch": MakeFaceModelBatch,
+ # --- Additional Nodes ---
+ "ReActorRestoreFace": RestoreFace,
+ "ReActorImageDublicator": ImageDublicator,
+ "ImageRGBA2RGB": ImageRGBA2RGB,
+ "ReActorUnload": ReActorUnload,
+ }
+
+ NODE_DISPLAY_NAME_MAPPINGS = {
+ # --- MAIN NODES ---
+ "ReActorFaceSwap": "ReActor 🌌 Fast Face Swap",
+ "ReActorFaceSwapOpt": "ReActor 🌌 Fast Face Swap [OPTIONS]",
+ "ReActorOptions": "ReActor 🌌 Options",
+ "ReActorFaceBoost": "ReActor 🌌 Face Booster",
+ "ReActorMaskHelper": "ReActor 🌌 Masking Helper",
+ # --- Operations with Face Models ---
+ "ReActorSaveFaceModel": "Save Face Model 🌌 ReActor",
+ "ReActorLoadFaceModel": "Load Face Model 🌌 ReActor",
+ "ReActorBuildFaceModel": "Build Blended Face Model 🌌 ReActor",
+ "ReActorMakeFaceModelBatch": "Make Face Model Batch 🌌 ReActor",
+ # --- Additional Nodes ---
+ "ReActorRestoreFace": "Restore Face 🌌 ReActor",
+ "ReActorImageDublicator": "Image Dublicator (List) 🌌 ReActor",
+ "ImageRGBA2RGB": "Convert RGBA to RGB 🌌 ReActor",
+ "ReActorUnload": "Unload ReActor Models 🌌 ReActor",
+ }
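The two mapping dicts above follow ComfyUI's node registration contract: the loader reads `INPUT_TYPES`/`RETURN_TYPES` from each class in `NODE_CLASS_MAPPINGS` and dispatches to the method named by `FUNCTION`. A toy sketch (no ComfyUI imports; the class and values here are illustrative, not part of ReActor) shows the shape the loader expects:

```python
class ToyNode:
    # Minimal ComfyUI-style node: the loader inspects INPUT_TYPES and
    # RETURN_TYPES, then calls the method named by FUNCTION.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "execute"
    CATEGORY = "🌌 ReActor"

    def execute(self, image):
        # Nodes return a tuple matching RETURN_TYPES.
        return (image,)

NODE_CLASS_MAPPINGS = {"ToyNode": ToyNode}

# Dispatch roughly the way the loader would:
node_cls = NODE_CLASS_MAPPINGS["ToyNode"]
result = getattr(node_cls(), node_cls.FUNCTION)(image="pixels")
```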
reactor_utils.py ADDED
@@ -0,0 +1,231 @@
+ import os
+ from PIL import Image
+ import numpy as np
+ import torch
+ from torchvision.utils import make_grid
+ import cv2
+ import math
+ import logging
+ import hashlib
+ from insightface.app.common import Face
+ from safetensors.torch import save_file, safe_open
+ from tqdm import tqdm
+ import urllib.request
+ import onnxruntime
+ from typing import Any
+ import folder_paths
+
+ ORT_SESSION = None
+
+ def tensor_to_pil(img_tensor, batch_index=0):
+     # Convert tensor of shape [batch_size, height, width, channels] at batch_index to a PIL Image
+     img_tensor = img_tensor[batch_index].unsqueeze(0)
+     i = 255. * img_tensor.cpu().numpy()
+     img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8).squeeze())
+     return img
+
+
+ def batch_tensor_to_pil(img_tensor):
+     # Convert tensor of shape [batch_size, height, width, channels] to a list of PIL Images
+     return [tensor_to_pil(img_tensor, i) for i in range(img_tensor.shape[0])]
+
+
+ def pil_to_tensor(image):
+     # Takes a PIL image and returns a tensor of shape [1, height, width, channels]
+     image = np.array(image).astype(np.float32) / 255.0
+     image = torch.from_numpy(image).unsqueeze(0)
+     if len(image.shape) == 3:  # If the image is grayscale, add a channel dimension
+         image = image.unsqueeze(-1)
+     return image
+
+
+ def batched_pil_to_tensor(images):
+     # Takes a list of PIL images and returns a tensor of shape [batch_size, height, width, channels]
+     return torch.cat([pil_to_tensor(image) for image in images], dim=0)
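As a quick sanity check on the two conversion helpers above, here is a minimal round-trip sketch. The helper bodies are repeated inline so the snippet runs standalone; a pure black-and-white image is used because the float32 scale/unscale is exact for pixel values 0 and 255, while intermediate values can drift by one level due to the truncating `astype(np.uint8)` in `tensor_to_pil`:

```python
import numpy as np
import torch
from PIL import Image

def pil_to_tensor(image):
    # PIL image -> float32 tensor of shape [1, H, W, C] in [0, 1]
    image = np.array(image).astype(np.float32) / 255.0
    image = torch.from_numpy(image).unsqueeze(0)
    if len(image.shape) == 3:  # grayscale: add a channel dimension
        image = image.unsqueeze(-1)
    return image

def tensor_to_pil(img_tensor, batch_index=0):
    # [B, H, W, C] tensor at batch_index -> PIL image
    img_tensor = img_tensor[batch_index].unsqueeze(0)
    i = 255. * img_tensor.cpu().numpy()
    return Image.fromarray(np.clip(i, 0, 255).astype(np.uint8).squeeze())

# Round trip on a 4x4 RGB image whose top half is white, bottom half black
src_arr = np.zeros((4, 4, 3), dtype=np.uint8)
src_arr[:2] = 255
src = Image.fromarray(src_arr)

t = pil_to_tensor(src)
back = tensor_to_pil(t)

assert t.shape == (1, 4, 4, 3) and t.dtype == torch.float32
assert np.array_equal(np.array(back), src_arr)
```

Note the NHWC (`[B, H, W, C]`) layout: that is the ComfyUI image convention these helpers assume, not the NCHW layout common elsewhere in PyTorch.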
+
+
+ def img2tensor(imgs, bgr2rgb=True, float32=True):
+
+     def _totensor(img, bgr2rgb, float32):
+         if img.shape[2] == 3 and bgr2rgb:
+             if img.dtype == 'float64':
+                 img = img.astype('float32')
+             img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+         img = torch.from_numpy(img.transpose(2, 0, 1))
+         if float32:
+             img = img.float()
+         return img
+
+     if isinstance(imgs, list):
+         return [_totensor(img, bgr2rgb, float32) for img in imgs]
+     else:
+         return _totensor(imgs, bgr2rgb, float32)
+
+
+ def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)):
+
+     if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):
+         raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')
+
+     if torch.is_tensor(tensor):
+         tensor = [tensor]
+     result = []
+     for _tensor in tensor:
+         _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max)
+         _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0])
+
+         n_dim = _tensor.dim()
+         if n_dim == 4:
+             img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy()
+             img_np = img_np.transpose(1, 2, 0)
+             if rgb2bgr:
+                 img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
+         elif n_dim == 3:
+             img_np = _tensor.numpy()
+             img_np = img_np.transpose(1, 2, 0)
+             if img_np.shape[2] == 1:  # gray image
+                 img_np = np.squeeze(img_np, axis=2)
+             else:
+                 if rgb2bgr:
+                     img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
+         elif n_dim == 2:
+             img_np = _tensor.numpy()
+         else:
+             raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}')
+         if out_type == np.uint8:
+             # Unlike MATLAB, numpy.uint8() WILL NOT round by default.
+             img_np = (img_np * 255.0).round()
+         img_np = img_np.astype(out_type)
+         result.append(img_np)
+     if len(result) == 1:
+         result = result[0]
+     return result
+
+
+ def rgba2rgb_tensor(rgba):
+     # Drop the alpha channel of a [B, H, W, 4] image tensor
+     r = rgba[..., 0]
+     g = rgba[..., 1]
+     b = rgba[..., 2]
+     return torch.stack([r, g, b], dim=3)
+
+
+ def download(url, path, name):
+     request = urllib.request.urlopen(url)
+     total = int(request.headers.get('Content-Length', 0))
+     with tqdm(total=total, desc=f'[ReActor] Downloading {name} to {path}', unit='B', unit_scale=True, unit_divisor=1024) as progress:
+         urllib.request.urlretrieve(url, path, reporthook=lambda count, block_size, total_size: progress.update(block_size))
+
+
+ def move_path(old_path, new_path):
+     if os.path.exists(old_path):
+         try:
+             models = os.listdir(old_path)
+             for model in models:
+                 move_old_path = os.path.join(old_path, model)
+                 move_new_path = os.path.join(new_path, model)
+                 os.rename(move_old_path, move_new_path)
+             os.rmdir(old_path)
+         except Exception as e:
+             print(f"Error: {e}")
+             new_path = old_path
+
+
+ def addLoggingLevel(levelName, levelNum, methodName=None):
+     # Register a custom logging level plus convenience methods on the Logger class and the logging module
+     if not methodName:
+         methodName = levelName.lower()
+
+     def logForLevel(self, message, *args, **kwargs):
+         if self.isEnabledFor(levelNum):
+             self._log(levelNum, message, args, **kwargs)
+
+     def logToRoot(message, *args, **kwargs):
+         logging.log(levelNum, message, *args, **kwargs)
+
+     logging.addLevelName(levelNum, levelName)
+     setattr(logging, levelName, levelNum)
+     setattr(logging.getLoggerClass(), methodName, logForLevel)
+     setattr(logging, methodName, logToRoot)
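`addLoggingLevel` is a standard stdlib-only recipe for custom log levels; a short sketch of how ReActor-style code might use it (the function body is repeated inline so the snippet runs standalone; the `STATUS` name and number 25 are illustrative, not what ReActor registers):

```python
import logging

def addLoggingLevel(levelName, levelNum, methodName=None):
    # Register a custom logging level plus convenience methods
    if not methodName:
        methodName = levelName.lower()

    def logForLevel(self, message, *args, **kwargs):
        if self.isEnabledFor(levelNum):
            self._log(levelNum, message, args, **kwargs)

    def logToRoot(message, *args, **kwargs):
        logging.log(levelNum, message, *args, **kwargs)

    logging.addLevelName(levelNum, levelName)
    setattr(logging, levelName, levelNum)
    setattr(logging.getLoggerClass(), methodName, logForLevel)
    setattr(logging, methodName, logToRoot)

# Register a STATUS level between INFO (20) and WARNING (30)
addLoggingLevel("STATUS", 25)
logger = logging.getLogger("reactor.demo")
logger.setLevel(logging.STATUS)

assert logging.getLevelName(25) == "STATUS"
assert logger.isEnabledFor(logging.STATUS)
assert hasattr(logger, "status")  # logger.status("msg") now works
```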
+
+
+ def get_image_md5hash(image: Image.Image):
+     md5hash = hashlib.md5(image.tobytes())
+     return md5hash.hexdigest()
+
+
+ def save_face_model(face: Face, filename: str) -> None:
+     try:
+         tensors = {
+             "bbox": torch.tensor(face["bbox"]),
+             "kps": torch.tensor(face["kps"]),
+             "det_score": torch.tensor(face["det_score"]),
+             "landmark_3d_68": torch.tensor(face["landmark_3d_68"]),
+             "pose": torch.tensor(face["pose"]),
+             "landmark_2d_106": torch.tensor(face["landmark_2d_106"]),
+             "embedding": torch.tensor(face["embedding"]),
+             "gender": torch.tensor(face["gender"]),
+             "age": torch.tensor(face["age"]),
+         }
+         save_file(tensors, filename)
+         print(f"Face model has been saved to '{filename}'")
+     except Exception as e:
+         print(f"Error: {e}")
+
+
+ def load_face_model(filename: str):
+     face = {}
+     with safe_open(filename, framework="pt") as f:
+         for k in f.keys():
+             face[k] = f.get_tensor(k).numpy()
+     return Face(face)
+
+
+ def get_ort_session():
+     global ORT_SESSION
+     return ORT_SESSION
+
+ def set_ort_session(model_path, providers) -> Any:
+     global ORT_SESSION
+     onnxruntime.set_default_logger_severity(3)
+     ORT_SESSION = onnxruntime.InferenceSession(model_path, providers=providers)
+     return ORT_SESSION
+
+ def clear_ort_session() -> None:
+     global ORT_SESSION
+     ORT_SESSION = None
+
+ def prepare_cropped_face(cropped_face):
+     # BGR uint8 HWC in [0, 255] -> RGB float32 NCHW in [-1, 1]
+     cropped_face = cropped_face[:, :, ::-1] / 255.0
+     cropped_face = (cropped_face - 0.5) / 0.5
+     cropped_face = np.expand_dims(cropped_face.transpose(2, 0, 1), axis=0).astype(np.float32)
+     return cropped_face
+
+ def normalize_cropped_face(cropped_face):
+     # RGB float CHW in [-1, 1] -> BGR uint8 HWC in [0, 255]
+     cropped_face = np.clip(cropped_face, -1, 1)
+     cropped_face = (cropped_face + 1) / 2
+     cropped_face = cropped_face.transpose(1, 2, 0)
+     cropped_face = (cropped_face * 255.0).round()
+     cropped_face = cropped_face.astype(np.uint8)[:, :, ::-1]
+     return cropped_face
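These two helpers are exact inverses for uint8 input: the float32 round-off stays far below the 0.5 rounding threshold, so `.round()` recovers the original pixel values. A standalone, numpy-only sketch (helper bodies repeated inline):

```python
import numpy as np

def prepare_cropped_face(cropped_face):
    # BGR uint8 HWC -> RGB float32 NCHW in [-1, 1] (same as the helper above)
    cropped_face = cropped_face[:, :, ::-1] / 255.0
    cropped_face = (cropped_face - 0.5) / 0.5
    return np.expand_dims(cropped_face.transpose(2, 0, 1), axis=0).astype(np.float32)

def normalize_cropped_face(cropped_face):
    # RGB float CHW in [-1, 1] -> BGR uint8 HWC
    cropped_face = np.clip(cropped_face, -1, 1)
    cropped_face = (cropped_face + 1) / 2
    cropped_face = cropped_face.transpose(1, 2, 0)
    return (cropped_face * 255.0).round().astype(np.uint8)[:, :, ::-1]

# Round trip: prepare -> drop batch dim -> normalize recovers the original pixels
face_bgr = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
batched = prepare_cropped_face(face_bgr)
assert batched.shape == (1, 3, 8, 8) and batched.dtype == np.float32
restored = normalize_cropped_face(batched[0])
assert np.array_equal(restored, face_bgr)
```

This is the usual pre/post-processing pair around a face-restoration ONNX model: `prepare_cropped_face` builds the network input batch, `normalize_cropped_face` turns the network output back into an OpenCV-style BGR image.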
+
+
+ # author: Trung0246 --->
+ def add_folder_path_and_extensions(folder_name, full_folder_paths, extensions):
+     # Iterate over the list of full folder paths
+     for full_folder_path in full_folder_paths:
+         # Use the provided function to add each model folder path
+         folder_paths.add_model_folder_path(folder_name, full_folder_path)
+
+     # Now handle the extensions. If the folder name already exists, update the extensions
+     if folder_name in folder_paths.folder_names_and_paths:
+         # Unpack the current paths and extensions
+         current_paths, current_extensions = folder_paths.folder_names_and_paths[folder_name]
+         # Update the extensions set with the new extensions
+         updated_extensions = current_extensions | extensions
+         # Reassign the updated tuple back to the dictionary
+         folder_paths.folder_names_and_paths[folder_name] = (current_paths, updated_extensions)
+     else:
+         # If the folder name was not present, add_model_folder_path would have added it with the last path;
+         # ensure all paths are included and set the extensions (which would otherwise be an empty set)
+         folder_paths.folder_names_and_paths[folder_name] = (full_folder_paths, extensions)
+ # <---
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ albumentations>=1.4.16
+ insightface==0.7.3
+ onnx>=1.14.0
+ opencv-python>=4.7.0.72
+ numpy==1.26.3
+ segment_anything
+ ultralytics