Lupine Publishers | Kinetic Isotherm Studies of Azo Dyes by Metallic Oxide Nanoparticles Adsorbent

Lupine Publishers | An archive of organic and inorganic chemical sciences
Abstract
We report the synthesis of Cu4O3 nanoparticles fabricated using Camellia sinensis (green tea) leaf extract as the reducing and stabilizing agent, and we studied their azo dye removal efficiency. The formation of copper oxide nanoparticles was indicated by the color change of the salt-and-extract solution from green to pale yellow. The nanoparticles were then characterized by SEM, XRD, FTIR, and UV spectrophotometry for size and morphology. The average crystallite size of the copper oxide nanoparticles was found to be 17.26 nm by the XRD Scherrer equation, and the average grain diameter calculated from SEM was 8.5×10-2 mm, with spherical and oval shapes. The UV absorption range was between 200-400 nm. These copper oxide nanoparticles were applied to the degradation of the azo dyes Congo red and malachite green. The effects of the reaction parameters were studied to find the optimum conditions. Isotherm and kinetic models (Langmuir, Freundlich, and Elovich) were applied. We conclude that these particles show effective degradation potential for azo dyes, removing about 70-75% from aqueous solution.
Keywords: Green Tea; Cu4O3; Green Synthesis; XRD; Congo Red; Malachite Green
Background
With continuing advances in technology, scientific developments are reaching new horizons [1]. Alongside other demands, the volume of industrial wastewater has increased rapidly, producing large amounts of effluent containing azo dyes. Azo dyes are the foremost group of commercial pollutants [2]. They are a class of synthetic dyes with a complex aromatic structure containing an azo bond between two adjacent nitrogen atoms (N=N), which imparts color to materials [3]. The aromatic structures of these dyes make them chemically stable and poorly biodegradable [4]. The textile industry consumes prodigious quantities of hazardous chemicals, particularly in dyeing operations. This work focuses on the azo dyes malachite green (MG) and Congo red (CR). The toxicity of azo dyes is compounded by the fact that, upon decomposition, they break down into hazardous products [5]. MG and CR have been removed from water samples by physical, chemical, and biological methods. Azo dyes are toxic, cause aesthetic problems, and have mutagenic and carcinogenic effects on human health, so they must be degraded [6]. In this work, adsorption onto copper oxide nanoparticles was used to treat wastewater containing azo dyes: Cu4O3 nanoparticles were applied as an adsorbent for the degradation of MG and CR dyes, together with kinetic and isotherm studies. Biogenic synthesis is regarded as an emerging approach for producing nanoparticles of a desired shape and size using plant extracts [7]; nanoparticles synthesized this way rely on cost-friendly, mildly reactive reagents. Conventional physical and chemical methods are also applied for the decolorization of azo dyes: physical methods include osmosis, filtration, adsorption, and flocculation, while chemical methods (oxidation, electrolysis) and biological methods (microorganisms, enzymes) are also applicable [8]. Green nanotechnology deals with the manipulation of matter typically in the 1-100 nm size range. Nanoparticles have a high surface-to-volume ratio, which is responsible for their enhanced properties [9]; this specific surface area is well suited to adsorption and related applications such as dye removal [10].
Azo dyes normally have an aromatic structure and an N=N bond, which is why they are hardly biodegradable [11,12]. These dyes also have mutagenic and carcinogenic effects. Conventional methods generally have considerably less degradation potential, whereas copper oxide nanoparticles remove dyes efficiently [12-17] and are a low-cost, novel adsorbent for azo dyes, effective at removing them from wastewater [12]. Malachite green (C23H25ClN2, molar mass 364.91 g/mol) is an organic dye with a lethal dose of 80 mg/kg; its structure is shown in Figure 1 below. Congo red, an azo dye, is the sodium salt of a 3,3′-bis(azo) compound. It is water soluble, and its solubility is enhanced in organic solvents; its molecular formula is C32H22N6Na2O6S2 with a molar mass of 696.665 g/mol [13-14]. Its structure is given in Figure 2. Camellia sinensis is a small evergreen tree whose leaves act as capping and reducing agents during the synthesis of metal nanoparticles. Green tea extract has antitumor, antioxidant, anticoagulant, antiviral, and blood-pressure-lowering activity [18-22] (Figure 3). Plant extracts contain chemicals such as phenols, acids, and vitamins that are responsible for metal reduction [23]; Camellia sinensis leaves in particular contain polyphenols, catechins (ECG), and OH groups that reduce the copper metal (Table 1). Plants contain a wide range of secondary metabolites, including phenolics, that play a vital role in the reduction of copper ions to yield nanoparticles [24], so they can readily be used for the biosynthesis of nanoparticles. The copper oxide Cu4O3 is known as the mineral paramelaconite and has a tetragonal structure. Copper nanoparticles synthesized using green tea have nanoscale particle sizes, as confirmed by characterization [25-28]. This is a one-step process in which no surfactants or other capping agents are used.

Aims of Study

The main aims of the study were:
a) To synthesize copper nanoparticles using Camellia sinensis leaves
b) To characterize the copper NPs
c) To study their potential to degrade azo dyes
d) To find out the effect of different experimental parameters on % degradation
e) To carry out a kinetic study of the adsorption of Congo red and malachite green dyes

Method

Material and Method

The materials used for the preparation of the Cu4O3 copper nanoparticles were copper sulfate (CuSO4.5H2O, Sigma Aldrich) and Camellia sinensis leaves (from the institute's botanical garden) for the preparation of the green tea extract. All chemicals used were of analytical grade (Figure 4).

Preparation of Green Tea Extract

Green tea leaves (30 g) were washed with distilled water, dried, and ground. The green tea powder was used to prepare the extract [29]: it was added to 100 ml of deionized water, boiled for 10 minutes, filtered, and then stored at low temperature.

Preparation of Cu4O3 Nanoparticles

Copper sulfate solution (50 ml) was added to 5 ml of green tea extract and mixed with a magnetic stirrer. The color change from green to pale yellow and finally dark brown confirmed the formation of nanoparticles. The solution was then centrifuged at 1000 rpm for 20 minutes. After removal of the supernatant, the copper oxide nanoparticles were washed with ethanol and dried. Finally, calcination was performed at 500 °C for one hour, and the resulting black particles were collected for characterization [27-29].

Results

Characterization of Cu4O3 Nanoparticles

A UV spectrophotometer, an X-ray diffractometer (XRD), a Fourier transform infrared spectrophotometer (FTIR), and a scanning electron microscope (SEM) were used to characterize the size, shape, and chemical and structural composition of the Cu4O3 nanoparticles [30]. During the study, the green solution turned dark brown, which confirmed the formation of copper oxide nanoparticles.

X-Ray Diffraction Studies

The X-ray diffraction pattern of the copper oxide nanoparticles was examined with an X-ray diffractometer; the powder was loaded into the XRD sample holders for analysis. The resulting pattern matched JCPDS card number 00-033-0480, with peaks at 2θ values of 28.09°, 30.61°, 36.14°, and 44.14°, corresponding to the (112), (103), (202), and (213) planes respectively. The average crystallite size calculated by the Scherrer equation, using λ = 0.154 nm and a FWHM of 0.5°, was 17.2 nm. The crystal structure of the Cu4O3 nanoparticles indicated by XRD was tetragonal [31-33].
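As a rough cross-check of the reported crystallite size, the Scherrer calculation can be sketched as below. The shape factor K = 0.9 and the choice of the 36.14° reflection are assumptions, since the text does not state them.

```python
import math

# Scherrer equation: D = K * lambda / (beta * cos(theta))
K = 0.9                      # shape factor (assumed; not stated in the text)
wavelength_nm = 0.154        # Cu K-alpha wavelength in nm (from the text)
fwhm_deg = 0.5               # peak broadening (FWHM) in degrees (from the text)
two_theta_deg = 36.14        # reflection assumed to be used for the estimate

beta = math.radians(fwhm_deg)            # FWHM converted to radians
theta = math.radians(two_theta_deg / 2)  # Bragg angle

D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"Crystallite size ~ {D_nm:.1f} nm")  # ~16.7 nm, in line with the reported 17.2 nm
```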

Name and Formula

Reference code: 00-033-0480
Mineral name: Paramelaconite
Compound name: Copper Oxide
Empirical formula: Cu4O3
Chemical formula: Cu4O3

Ultraviolet Spectroscopy:

The copper oxide nanoparticles absorbed in the 200-400 nm range, with a maximum absorption peak at 280 nm, which confirmed the copper oxide nanoparticles (Figure 6).

FTIR Analysis:

In the current study, the FTIR spectrum was examined to identify the functional group peaks on the copper nanoparticles. The spectrum was recorded over the range 400 to 4000 cm-1. The peaks at 3310.7 cm-1 and 1611.2 cm-1 reveal the presence of hydroxyl (alcoholic) groups (Figure 7). The bands at 3310.7 cm-1 and 2850 cm-1, along with the other functional groups present, are listed in the table below (Table 2).

SEM Analysis:

The average particle size of the copper nanoparticles was analyzed by SEM (model JSM-6480). The average grain size of the copper oxide nanoparticles was calculated from the SEM micrographs to be about 8.5×10-2 mm. The prepared copper oxide nanoparticles were well dispersed, and the particles were observed to be smooth with a tetragonal shape (Figure 8).

Removal of Malachite Green and Congo Red Azo Dyes by Cu4O3 Nanoparticles

Preparation of Standard Solution: The dye was dissolved in 1 liter of distilled water to prepare 1000 ppm stock solutions of malachite green and Congo red. Different dye concentrations were prepared from the stock solution: a 100 ppm solution was made by diluting the 1000 ppm stock, and 150, 200, and 250 ppm solutions were prepared in the same way. The color removal efficiency was calculated by the percentage degradation formula
% decolorization of dye = (A − B)/A × 100
where A and B are the absorbances of the dye solution without and with nanoparticles, respectively.
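A minimal sketch of the dilution and percentage-degradation arithmetic is given below; the absorbance readings are hypothetical and only illustrate the formula above.

```python
def percent_decolorization(A, B):
    """% dye removal, with A = absorbance without nanoparticles, B = with nanoparticles."""
    return (A - B) / A * 100

def stock_volume_ml(c_stock_ppm, c_target_ppm, v_target_ml):
    """Stock volume needed for a dilution, from C1*V1 = C2*V2."""
    return c_target_ppm * v_target_ml / c_stock_ppm

# Hypothetical absorbance readings, for illustration only
print(percent_decolorization(A=0.80, B=0.22))   # 72.5 % removal
# Stock needed to prepare 100 ml of 150 ppm dye from the 1000 ppm stock
print(stock_volume_ml(1000, 150, 100))          # 15.0 ml
```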
Mechanism of Azo Dye Degradation
Hydrogen peroxide (H2O2, 50 µl) was added as the oxidizing agent to yield hydroxyl radicals. The catalytic process mainly depends on the formation of superoxide anion radicals and hydroxyl radicals. The concentrations of the CR and MG dyes in aqueous solution were measured by UV-vis spectrophotometer. H2O2 was also added together with the adsorbent to check its effect on the adsorption capacity.

Effect of Experimental Parameters On % Degradation of Dye Removal

Time effect: The effect of time on the percentage degradation of the azo dyes was studied by UV spectrophotometry. Samples of copper oxide NPs synthesized with green tea, C-1 and C-2 (GT), were measured. The time required for removal of the dyes was between 40-45 min, and the percentage removal observed for all samples was between 70-75%. The graphs clearly show the effect of time on the color degradation of the azo dyes malachite green (MG) and Congo red (CR, C.I. Direct Red 28) by the copper oxide nanoparticle adsorbent. The experimental conditions were kept constant, with the temperature at 308 K and the initial adsorbent concentration in the range 20-250 mg/l. C-1 and C-2 are sample codes for particles synthesized with Camellia sinensis leaf extract at different temperatures. In the figure below, sample C-1 is dye + adsorbent + H2O2 and sample C-2 is without H2O2; the graphs show that the % degradation was enhanced in the presence of H2O2. Figure 9: Effect of time on malachite green and Congo red dye degradation by copper oxide nanoparticle samples C-1 and C-2 (green tea mediated), measured with a DB-20 ultraviolet spectrophotometer.
Adsorption Kinetics Studies: The kinetics of azo dye adsorption were studied under the selected optimum operating conditions. The kinetic parameters are helpful for estimating the adsorption rate. A suspension was prepared by adding 20 mg of adsorbent to 50 ml of 10 ppm dye solution with continuous stirring.
Adsorption Kinetic Studies of Copper Oxide NPs: The pseudo-second-order model was found to explain the adsorption kinetics most effectively. The results indicate the significant potential of the nanoparticles as an adsorbent for azo dye removal. The straight-line fit shows that adsorption on the nanoparticles follows pseudo-second-order rather than first-order kinetics.
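A sketch of how a pseudo-second-order fit can be checked is shown below; the contact-time data are hypothetical, since the raw values are not reproduced here.

```python
import numpy as np

# Pseudo-second-order model, linear form: t/qt = 1/(k2*qe**2) + t/qe
# A straight line of t/qt against t indicates pseudo-second-order kinetics.
t  = np.array([5, 10, 15, 20, 30, 40, 45], dtype=float)  # contact time, min (hypothetical)
qt = np.array([2.1, 3.4, 4.2, 4.8, 5.5, 5.9, 6.0])       # uptake, mg/g (hypothetical)

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1 / slope                    # equilibrium uptake, mg/g
k2 = slope**2 / intercept         # rate constant, g/(mg*min)
r2 = np.corrcoef(t, t / qt)[0, 1]**2

print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.3f}")
```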

Adsorption Reaction Isotherm Models

Langmuir Isotherm Model: The Langmuir isotherm applies to adsorption of a solute as a monolayer on a surface with a limited number of identical sites, and it assumes uniform adsorption energies across the surface. The Langmuir model was therefore selected to describe the monolayer adsorption capacity of the adsorbent surface. The adsorption process fits the Langmuir and pseudo-second-order models. The Langmuir isotherm describes adsorption on uniform (single-crystal-like) surfaces well at low to medium coverage, and multilayer adsorption is ruled out. The parameters of the different models studied in this research are listed below in Table 3.
Freundlich Isotherm Model: The Freundlich isotherm model is suitable for the adsorption of the dye on the adsorbent. The linearized Freundlich equation is stated below:
ln qe = ln Kf + (1/n) ln Ce
where qe is the amount of azo dye adsorbed (mg/g), Ce is the equilibrium concentration of the azo dye, and Kf and n are constants related to the adsorption capacity and adsorption intensity. The plot of ln qe versus ln Ce is linear. The adsorption isotherms were fitted to the models by the linear least-squares method; in this study, the Langmuir model fit better than the Freundlich model. The adsorption activity of the copper oxide nanoparticle samples prepared from the green source was observed against the degradation of the malachite green and Congo red azo dyes (Figure 15).
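The two isotherm fits described above can be reproduced with a simple least-squares sketch like the one below; the equilibrium data are hypothetical, and the model whose linear plot has the higher R² is taken as the better fit.

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/l, qe in mg/g), for illustration only
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([3.1, 4.6, 6.8, 9.9, 14.5])

# Freundlich, linear form: ln(qe) = ln(Kf) + (1/n) * ln(Ce)
slope_f, intercept_f = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf, n = np.exp(intercept_f), 1 / slope_f

# Langmuir, linear form: Ce/qe = 1/(KL*qm) + Ce/qm
slope_l, intercept_l = np.polyfit(Ce, Ce / qe, 1)
qm, KL = 1 / slope_l, slope_l / intercept_l

print(f"Freundlich: Kf = {Kf:.2f}, n = {n:.2f}")
print(f"Langmuir:   qm = {qm:.1f} mg/g, KL = {KL:.4f} l/mg")
```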

Discussion

In the present work we report an eco-friendly and cost-efficient preparation of copper oxide nanoparticles using leaf extract of Camellia sinensis. The particles were characterized by SEM, UV, XRD, and FTIR analysis. The UV spectroscopy peak was observed at 280 nm with a broad band, confirming the existence of the nanoparticles. The particle size calculated by the Scherrer equation was 17.26 nm. The SEM results confirmed the tetragonal shape of the Cu4O3 particles with an average grain diameter of 8.5×10-2 mm, and the FTIR spectra showed peaks of OH, C=C, and C-H functional groups, attributable to a thin coating of extract on the nanoparticles. The calculated surface area of the nanoparticles was 65 m2/g. The % degradation of the azo dyes malachite green and Congo red was between 70-75% at an adsorbent dosage of 0.2 g/l and a dye concentration of 20 mg/l. The optimum conditions for maximum degradation were a contact time of 30-40 min, pH 3-4, and a temperature of 70-80 °C. The effect of the different experimental parameters on the percentage degradation of the dyes was studied, and the adsorption isotherm models for Congo red and malachite green were examined. The reaction kinetics followed the pseudo-second-order model for both dyes rather than first order. The Langmuir model fit more linearly than the Freundlich model, as confirmed by R² values of 0.98, 0.99, and 0.95 for the respective models; the Elovich model also gave a linear fit. In conclusion, copper oxide nanoparticles have excellent azo dye degradation potential.

Conclusion

In the present work we have reported an eco-friendly and cost-efficient preparation of copper oxide nanoparticles using leaf extract of Camellia sinensis. The kinetic study showed that the Cu4O3 NPs have excellent adsorption capability for the MG and CR azo dyes.
https://lupinepublishers.com/chemistry-journal/pdf/AOICS.MS.ID.000174.pdf
https://lupinepublishers.com/chemistry-journal/fulltext/kinetic-isotherm-studies-of-azo-dyes-by-metallic-oxide-nanoparticles-adsorbent.ID.000174.php

ANSWERS: Mastering engineer Alain Paul (Tommy Four Seven, Paula Temple) responds to your AMAs

Back in May, I posted the AMA for mastering engineer and producer Alain Paul. Since Alain isn't on social media, we collaborated together offline to compile his responses to all your questions. Here are his answers, and there are some real nuggets of truth hidden here. I highly recommend you read through them all if you are at all interested in techno production or mastering in general.
What traits would you consider important for a person, independently of his (production) skills? What would be one of the best skills/traits to have as a person which can be passed on to your production mindset and your overall sound quality? (via maka (Discord))
Someone who wants to be a mastering engineer should have the personality of a robot. The more like a robot you are the more tracks you can master. For me, not being a robot, I struggle to work on tracks in a conveyor belt fashion and absolutely need to take lots of breaks and days off so my capacity is far lower than some other engineers who I know who sit there 8 hours a day and bosh tracks out like machines. But that’s mastering. If you are asking about creativity, I find that the opposite is important. Don’t be a robot. Be weird, wonderful, unpredictable, arrogant and all the things your average employer doesn’t want to hear….. but you need consistency and perseverance otherwise you will never make it. Most guys I know who have success have been going at it for many years.
When it comes to techno, what steps do you usually follow to master a track and are there issues we should consider that most tracks have? (via Caen83)
Often the kick isn’t strong enough. Hats are too loud. Stereo imaging is not mono compatible. They are the main problems I see on a routine basis.
What are the top 3 most common mix critique fixes you give, excluding simple balancing (hat too loud etc) and too hot mixes (peaks too high/clipping)? (via Arry_Propah)
Well, hats too loud is probably the third most common. Hats could also mean in this context shakers or any kind of high perc which is not sitting in the mix. Mostly that is just levels but it can also be EQ. Often people will try and view their mix in pigeon holes. They want the kick to occupy a certain frequency range, the top line to be in another frequency range and the hats to be in another etc. But the end result of this method of mixing is very often an over-EQed sound and I will usually get the stems and try to make the frequency response of the sounds more balanced again and bring back some of the detail lost in the mix by this style of over EQing. The second most frequent thing has got to be weird stereo imaging / mono compatibility issues. Especially with less experienced artists, there is a tendency to put ultra stereo widening stuff on all the sounds or even on the whole mix. This is one of the worst things you can do while mixing and I reject a lot of mixes because of this. It is far better to mix completely mono than mix “over wide”. But of course the best way is to mix with a strong mono image with supplementary stereo effects to make it sound nicer, but going crazy with the stereo invariably kills the mix. And in first place, by far the most common one is not getting the kick to sit right in the mix. And that isn’t just a level thing. Over the years I had to deal with a lot of kick problems and find a lot of different solutions, anywhere from EQ to gating to sample triggering. The kick is the most important part of most dance tracks so it has to sound right.
Is there any approach we can do during mixing that would make master EQing come out better? Things we should avoid or things we can push (via brucereyne)
Every track is different and everyone’s mixing tastes are different but some general rules do apply especially to techno or electronic dance music generally, such as: the kick is often the foundation of the track, if any other element of the mix is significantly louder than the kick, or the kick seems quiet, you should probably reconsider or at least be aware that this choice is unusual. HiHats should not be too loud. If you turn the mix up loud and the hats hurt your ears then they are too loud. If you have some kind of sub bass or bass line, this should generally not be louder either in terms of perception or peak level than the kick drum. If it is, the bass might be too loud or your kick might be too quiet. Jungle / Drum and Bass can have exceptions to the kick / bass ratio but techno can rarely have a feeble kick and still sound great.
What's the biggest advantage and disadvantage of a multiband compressor vs a single band compressor as a main "glue" compressor in the master chain? (via gombocrec)
I find the biggest disadvantage of using a multi band compressor on the sum is that it generally will just add huge amounts of mush and transient degradation and significantly decrease the quality of the mix, so I generally will stay away. But the advantage is that it can sometimes save a poor mix where the session has been lost and there aren’t any stems, if there is some weird sound that jumps out etc. Using it as some type of “glue” though is generally a bad idea in my eyes and I see a lot of inexperienced people doing this with bad results. Just because you can get things louder it doesn’t mean it is better. Very rarely is multi band on the sum a desirable thing in professional mastering.
What would be your number one tip for creating a sparkly high end that isn't harsh? Is it simply a case of some choice eq moves? Is a very focused compression band on the high end a good idea? (via Willlockyear)
I think this question is a compositional question disguised as a technical question. Let me explain…. Go and switch on a 909 or equivalent, software or hardware it doesn’t really matter, run your finger across all the steps on the hihat channel and press play and listen loud to the constant 16th note hats. After a very short amount of time it should start to fatigue your ears an insane amount. You might feel your ears “compressing” or just feel like you don’t want to listen to this because it is unpleasant. Now, if you dial in a very loud, long, full, bassy 4/4 kick, the hats will hurt your ears much less because you aren’t just getting blasted in one frequency range. The difference is huge and you haven’t used any EQ, compression or studio tricks, it is simply compositional. Back to mastering…. I will sometimes get a mix where the artist thinks the top end is harsh, then I listen to the mix and it has constant loud hats. Well it is not even about the mastering or mixing process, constant loud hats with no variation are just simply harsh. And it is made worse if you have a very short, tight kick and not that much bass going on in the track generally because there are no frequencies from the bass balancing the high frequency assault of the hats. So rather than thinking about reaching for a compressor or EQ, try to change it compositionally by using side chaining on the hats or making the kick fuller or longer, or adding a thicker bassline, or sparsening out the hats a bit. When you have a great sounding mix in terms of composition, then it is much easier to get a great sounding mix technically and much less work is needed in mastering. But if you’ve done all that and are still looking for a super crisp top end, there are some tricks. Either using stuff like shimmery reverbs on your pads etc or try bussing some of the percussion sounds to two busses. A wet bus and a dry bus. On the wet bus you can boost the high frequency EQ a lot into a distortion. Then turn down the wet bus very low in the mix and feed it in until it thickens the highs but doesn’t become obvious.
What are some more creative techniques for gluing a track together besides reverb and compression (i.e. if you want to keep a track as dry as possible)? (via rorykoehler)
You say besides compression…. Well I totally get that it is all too common to slap an expensive compressor across the sum and fool yourself into thinking it sounds better because it is expensive. The more someone pays for a hardware compressor or the more shiny the plugin interface, the more people tend to hear magical “glue” properties. I personally think much of that is nonsense. Simply running everything through a stereo compressor isn’t the solution to sticking your mix together. The solution is crafting a nice mix and more importantly the compositional process itself. But this is exactly where compression comes in. If you aren’t using side chain compression, or using your modular system or Ableton modulation sources to really create dynamics and interplay between sounds then your mix won’t sound glued together because the elements in your tune aren’t vibing together. If you use side chain compression, gate dynamics, VCA and VCF modulation with LFOs and subtle envelopes from loads of triggers, you’re going to create a huge amount of dynamics as part of the compositional process and this will serve to glue everything together as part of the compositional process. And you will never want more glue as part of the mix because the tune will already vibe. In the mastering process, if a tune needs more glue, I will never run it through a stereo compressor or feed in reverb or whatever tricks other people reckon create glue. Generally I am going to be asking for stems and I will add some dynamics and interplay between the sounds using whatever modulations are appropriate for the tune.
The biggest thing I struggle with is lack of visibility below <50Hz (with my nearfields) and how that impacts my productions. Given the importance of these frequencies in techno it feels like painting with a blindfold. Other than cross referencing with headphones/subpac is there any other advice you could offer? (via MrSkruff)
You just need decent headphones. Don’t try and look at the sound on an FFT. I know some mastering engineers who religiously look at their FFTs to understand what is happening at lower frequencies but this is a total amateur mistake unless they are using very specialist software. This is because each bar on a spectrum analysis chart represents one “bin”. And if you switch to a line graph, you don’t get any more detail, it is still just the same bins but with a line drawn between each. The number of bins is determined by your window size… it is not uncommon to use 1024 bins across the spectrum analyser. Think about that, only a thousand data points across all audio frequencies. Most commonly the bin spacing is linear. This means, to cut a long technical story short, you only have a few data points under 50Hz. Maybe you might have only two data points, it depends on the window size. So what are you going to find out with two data points? Basically it tells you almost nothing. It is totally useless. So you might think, OK well then why don’t I ramp up the window size to get more accuracy? You can do that, you could have a window size of a million. The problem is, it will take a million samples of audio playback before you have a reading so you will have an unusably slow spectrum analyser. So there is a huge tradeoff between speed and accuracy. Either the FFT is so slow you can’t use it, or it is so inaccurate that you can’t use it. Either way you can’t use it for low frequencies. So get some decent headphones. If you are on a budget, get some medium price Sony ear buds and you can at least use them to listen to music on the train. If budget, size and weight are less important, grab a pair of Audeze LCD2 - and I’d check out the closed back version too - or other good planar magnetic headphones.
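For anyone who wants to see the arithmetic behind the "two data points" remark, here is a quick sketch (the 44.1 kHz sample rate is an assumption; the answer does not specify one):

```python
# Frequency resolution of an FFT analyser: bin width = sample_rate / window_size
sample_rate = 44100   # Hz (assumed)
window_size = 1024    # samples, as in the answer above

bin_width = sample_rate / window_size        # ~43 Hz per bin
bins_below_50 = int(50 // bin_width) + 1     # bins covering 0 Hz up to 50 Hz, including the DC bin
print(f"{bin_width:.1f} Hz per bin -> about {bins_below_50} bins below 50 Hz")

# A one-million-sample window gives ~0.04 Hz resolution but needs ~23 s of audio per reading
print(f"{1_000_000 / sample_rate:.1f} s of audio per analysis window")
```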
On the mastering chain, do you cut/roll off frequencies below 20hz? On the mastering chain or kick/bass groups, do you mono the low frequencies? For example, I often use the 'Utility' in Ableton to make <100-150hz mono. (via zimoofficial)
In mastering there is nothing that you do just because “you are supposed to always do it this way”. So I do not cut frequencies below 20hz as a routine thing. But if there is a DC offset, which seems to be more common with my house / disco clients as they run their mixes through all sorts of weird and wonderful vintage gear, I will use low shelving or high passing to get rid of unwanted stuff outside of the intended audio band. Narrowing the stereo image in the bass frequencies is something I do a lot of when artists have an unfocused stereo field. There is little benefit to having “wide stereo bass”. You struggle to cut it to vinyl, it leads to unpredictable results in clubs and in my opinion it doesn’t even sound good anyway. I generally try not to have a “sound” as a mastering engineer, other than well balanced and professional, but one thing I will happily accept as a characteristic of any “sound” I might have, would be you don’t get swirly, murky mud bass with my masters. No mud shall pass.
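One common way to narrow the bass, similar in spirit to the Utility trick mentioned in the question, is to high-pass the side channel of a mid/side split. The sketch below assumes SciPy and a 120 Hz crossover, neither of which comes from the answer.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mono_below(left, right, sample_rate, crossover_hz=120.0):
    """Collapse stereo content below crossover_hz to mono by high-passing
    the side (L-R) signal; the mid (L+R) signal is left untouched."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    sos = butter(4, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
    side_hp = sosfiltfilt(sos, side)        # remove side information below the crossover
    return mid + side_hp, mid - side_hp     # back to left/right

# Hypothetical stereo buffer: a 50 Hz component that is fully out of phase between channels
sr = 44100
t = np.arange(sr) / sr
left  = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
right = -np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
new_left, new_right = mono_below(left, right, sr)
# The out-of-phase 50 Hz is stripped from the sides, so mono and stereo now agree in the low end.
```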
How often are you EQing to correct something in a mix as opposed to EQing just for tone? In regards to EQing for tone - if this is something done often - are there certain frequencies that you adjust/accentuate based on the genre you’re working with or on an individual song basis? For example, many modern songs have the “smiley face curve” on the analyzer - bumped lows, scooped mids, bumped highs (via brucereyne)
Generally if there is something wrong in the mix, I will request stems or give mix feedback. I will only be very invasive with EQ if the client has lost the original session and it sounds bad and I need to be heavy handed to save a bad mix. The sound I shoot for in terms of tone, I am always looking for a balanced sound. I never EQ with a deliberate smiley curve just because that is “somehow supposed to be good”, because if you do this you lose the power and details of the mids. If you always EQ bright then you lose the warmth of the lows. If you always add lots of bass you lose the clarity of the highs. The only way which I think sounds good is to have a balanced sound. However, if you look at different genres on a spectrum analyser you might notice different kinds of general patterns but the variation is too big between songs in each genre to have that as any useful indicator of the way you should master a track. So stuff like EQ matching is all pretty much just nonsense in my opinion.
Different styles and subgenres have varying tonal and dynamic characteristics. How do you as a mastering engineer account for/judge this in determining whether a submitted track is within parameters of a "good mix"? E.g. Harsh Mentor - Salve is quite different from Tommy Four Seven - Dead Ocean. (via BedsitAudio)
Some mastering engineers do what I call “genre curving” and I used to be guilty of this myself when I first started out with mastering before I really knew what I was doing. When I first started out I was using Izotope Ozone back when it was quite new, I’m pretty sure it was version 3. Anyway you could take “snapshots” of tracks and I took a bunch of snapshots of reference house and techno tracks and figured out that they were very similar in how they looked. So I just used to match the curve of the track I was attempting to master, to the reference. And that was it. This is how I started off around about 15 years ago trying to understand how to master stuff but obviously this is not very professional. Sooner or later I realised that if a track had a longer kick drum it would have more bass on the curve than if it had a shorter kick drum, which led me to reduce the bass too much on the long kick drums and boost the bass too much with the short kick drums and then it would either sound feeble or distort easily, and I wouldn’t get the right volume and it didn’t sound very balanced. So then I felt like I had no more reference point and no benchmark to achieve any consistency….. as my attempt to achieve consistency ironically just ended up making things sound even less consistent! The solution is that you need to listen to a ton of music critically and you slowly develop an ear for what a balanced track sounds like. It’s like trying to ride a bike. At first it seems hard and you don’t really know what you are doing, but once you have developed the feel for it, you are able to do it. But just because you can ride a bike it doesn’t mean you are going to be good enough to ride a halfpipe. For that you need lots and lots of practice and there is absolutely no shortcut. If you try and drop in on a huge halfpipe first time because you have read a book on BMX, then you will just hurt yourself. Same with mastering. There is no technical knowledge or trick you can use, it is all just lots of practise.
What do you believe are the biggest trends in techno production and mastering right now? Where are we heading? (via teegeeteegeeteegee)
Mastering is all over the place in techno because you have a mixture of engineers. People sending their stuff to professional mastering studios and getting a proper job done but also artists trying to do it themselves and ending up with weird results. When working with someone new, they might send me a badly mastered track as a reference and say “I want this loudness” and also send me a professionally mastered track and say “but I want the richness and clarity of this track”. And I have to explain that the loud one is distorting and sounds like someone throwing a bag of spanners down the stairs whereas the professionally mastered one is slightly quieter but actually sounds great. Anyone can make anything sound loud by smashing it through a distortion plugin and boosting the high frequencies but that isn’t the way to make something sound great. The problem is, when DJs play a mixture of unpro mastered tracks with professional tracks, either they have to use the gain knobs (which of course any good DJ would normally do) or the unpro mastered tracks will sound louder. There is a tendency to hear a louder track as sounding better just because it is louder (this is the classic mastering loudness war thing) but the issue in techno is that it is possible to just run an entire track through a distortion unit whereas in most other genres you can’t. So there is a practical limit of common sense in most other genres but in techno, especially with the tougher stuff, there is seemingly no need for common sense in certain parts of the scene when people think the clipping and insane distortion sound good. There isn’t anything necessarily wrong with listening to a square wave if that is your thing, but you just cannot expect to get a richer more complex dynamic track to sound equally loud. Most decent artists absolutely understand this though and don’t care about the extra loudness when it comes at the cost of sacrificing everything else.
Given that modern techno requires such a cohesive sound, do you recommend producers work with comp/limiting on the master channel pre mastering? Do you have artists that give you looser mixes to allow you to do higher quality comp/limiting in the mastering stage? (via teegeeteegeeteegee)
Most artists I work with use a limiter (or just straight clipping) on the sum while they are composing and mixing the track. You can go as crazy as you want with limiting while working on your music. But the second you send it to be mastered you need to bounce the tracks with the limiter turned off and any compressor or saturation you have on the sum needs to definitely be turned off otherwise I will reject the mixes. Sometimes the artist will send a reference with a limiter and it might even be louder than my master. But the artist can pretty much always hear that my master sounds better and more balanced and so I do not try and “beat the loudness” of their demo masters. Everyone I work with values a high quality end result more than a crap result which is extremely loud. And I know this because I refuse to work with artists that only want loud. But sure, when you are composing feel free to use limiting and I actually do recommend working with or at least checking your mix with a loud limiter setting because you can often pick up very quickly on soggy sounding kicks or unreasonably loud bass etc.
Do techno producers these days tend to cut too much low end in their mixes? What tips would you give us for tighter low end that would work in a club setting? (via sonicloophole)
There is not one trend in the mixes I receive. I’d say that over half the mixes are too dull and a very large amount are too bright. It is the vast minority which have perfect tonality. Some significant and increasing portion of the mixes I receive have nonsensical stereo widening and out-of-phase elements. The increase in use of stereo widening plugins is causing issues for people’s ability to mix nicely. The best bet is to uninstall any stereo widening plugins you have. If it sounds “super wide”, it is probably just out of phase and will disappear when played in mono leading to a low quality feeble mix. Always check mono.
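A crude numeric version of the "always check mono" advice, assuming NumPy and a hypothetical test signal rather than anything from the answer:

```python
import numpy as np

def mono_check(left, right):
    """Correlation between channels and the level change (dB) when summed to mono."""
    corr = np.corrcoef(left, right)[0, 1]              # +1 mono, 0 uncorrelated, -1 out of phase
    stereo_rms = np.sqrt(np.mean((left**2 + right**2) / 2))
    mono_rms = np.sqrt(np.mean((0.5 * (left + right))**2))
    return corr, 20 * np.log10(mono_rms / stereo_rms)

# A "super wide" (largely out-of-phase) signal loses a lot of level in mono
sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 200 * t)
right = np.sin(2 * np.pi * 200 * t + 2.5)              # big phase offset between channels
corr, drop_db = mono_check(left, right)
print(f"correlation {corr:+.2f}, mono sum {drop_db:+.1f} dB relative to stereo")  # ~-0.80, ~-10 dB
```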
What is your all-time favourite techno track production wise (if it's more than one that's also fine ofc). (via Dr_eyebrow)
There are so many tracks out there which just sound perfect in terms of their technical presentation / sound quality. This has been made very easy by artists using pristine quality sample library sounds in their music and the increasingly easy to use DAWs like Ableton. But when I listen to music, especially techno, it’s not the technical presentation which makes a track become one of my favourites, it is the creativity of the track and how it makes me feel. That’s why when I make my own music, I step well outside of the zone of being a mastering engineer and write stuff which doesn’t necessarily have the best sound quality but makes me feel something (like SHARDS - Three - A2). So my taste in techno in terms of my favourite tracks follows the same idea…. So for example I remember when Tommy Four Seven made Armed 3 a decade ago and I heard it in Berghain, that was something new for me and the track stuck with me as being this weird and brilliant anomaly of techno before anyone else was really doing that kind of sound. Or when Szare released Scored, that was a real favourite of mine at the time, whether you can call that strictly techno or not. Like stuff which you can’t work out if it is pretending to be techno but really isn’t or if it is actually techno but is just an anomaly. Who is to say? Ancient methods - Drop Out was the coolest thing when I first heard that. SØS Gunver Ryberg makes some crazy material. SNTS and Headless Horseman make some of my favourite dark rolling tracks. Maybe I’m just influenced by the fact that I’ve worked with those artists but I will often hear one track somewhere and immediately fall in love with the creativity amid a cloud of good sounding average tracks. Making your track sound good in a technical way is important, but the creativity to make something which breaks the mould is much cooler.
What techno genre is hardest to master? Industrial techno has harsh transients, melodic techno has a larger dynamic range, etc. (via dangayle)
To me everything is the same difficulty to master in terms of subgenres. It isn’t really the style of music, it is the specific track which might be difficult and it generally has more to do with the person who composed and mixed the track. A pro melodic techno producer will submit an equally good quality mix to a pro industrial sounding producer. It is generally the inexperienced producers who create more of a challenge.
Is it easier/harder to master tracks that were created fully in the box vs tracks that come from modular or other live performances? (via dangayle)
Not really, it really depends on the material. Actually modular setups can sometimes create weird frequencies and be harder to manage than purely digital in the box sourced sounds. Also you can get a higher noise floor with modular gear to the point of it being really problematic. Despite this I am a huge fan of eurorack.
What is the best book on mixing and mastering? Old or new. Analog and digital. Thank you. (via MILOFUZZ1)
Books don't teach you how to mix, an internship in a decent studio does. I've done a bunch of unpaid internships in my time and by the time I joined Calyx Mastering in 2014 I thought I was pretty good, up to that point I had been earning a living from Mastering for around 6 years and out of the many applicants and after their very difficult job application mastering test, I was the one that got the job. Then the first day I started working there I had my ego deflated and suddenly felt like a complete amateur with the super high quality expectations there. By that time I already knew all the theoretical stuff you'd read in a book - it was the experience of working in a team of elite engineers which taught me the biggest lessons, not the theoretical stuff.
How do you feel about using the following on the master buss: saturation, stereo widening, mono-izing low frequencies, low cuts between 10-50 Hz, high cuts between 15-20+ kHz, using an AD style clipper at the end, multiband or standard compression for glue? (via fukinay)
Saturation: generally a bad idea unless it is in parallel. Stereo widening: disaster, don’t do this. Mono bass: generally a good idea. Low cuts: generally not necessary unless you have a DC offset or problematic stuff. High cuts: not generally necessary unless you have TV frequencies. Clipping: bad idea. Multiband compressor: bad idea. Stereo compressor: generally a bad idea unless in parallel.
In an untreated room, while using Sonarworks or IK Multimedia Arc2, how accurate can the mix and mastering be? (via Sonictrade)
Speaker correction does just that, it corrects the speakers. It doesn’t correct the room. Stuff which claims that it is room correction is generally a gimmick. This is because a poorly treated bad sounding room has problems in both the frequency domain and more importantly time domain. So you set your mic up to measure the response at your listening position and you do the sweeps and come up with a correction curve. Great, you have corrected the frequency response if you head is exactly where the mic was. Move a bit to the left or right, or back or forwards and you lose the sweet spot. Now sitting in the new position you might have a worse (deeper valley or higher peak) than you had with the room correction turned off because you may have moved out of a high pressure standing wave into low pressure in respect to those frequencies. So where you sit is very important in determining whether you are going to get the “flat” frequency response or a completely messed up one. In practise, if you stay generally in the right position the frequency response might possibly be good enough to work with but then you have a whole new problem which can be even worse than having an uneven frequency response… that is the problem of resonances. Especially in the lower and lower mid frequencies. This makes certain notes sound longer than they are. If you have a resonance around 50-60Hz you will always have a completely inaccurate understanding of how your kick sounds and when you play your mix elsewhere it is possible that your kick sounds very short and feeble whereas it sounded huge and beefy in your studio room. This is why speaker correction solutions should be seen as supplements to room treatment and second in line, not first in line. Getting some bass traps and basic acoustic treatment doesn’t cost huge amounts… if you have a modular system you can probably afford to treat your room. But if you are on a budget it is very easy to make DIY solutions using rockwool based DIY traps. Just make sure to use a mask and a very thin layer of plastic under the fabric to keep the fibres from escaping through the fabric and being breathed in.
Kind of curious about the theory behind why one of my mixes that hits at -8 LUFS sounds softer than another mix at roughly the same LUFS. Is there an element in my mix that is hitting harder, say my kick, that is louder in one and taking up more of my headroom? (via Dudemanbro88)
LUFS is not an accurate determiner of loudness despite the fact that it was designed specifically to do just that and everyone now seems to think it is a more accurate determiner of loudness than their own ears. It is actually quite difficult to create a calculated number to say how loud humans will perceive sound. Traditionally everyone has used RMS but it is well known that RMS is very bass influenced. That is, if you have a very bassy recording and a very trebly recording and then normalised them to the same RMS value, the bassy recording would sound much quieter. So the broadcast industry experts came up with a solution using the K weighting system to deemphasise the influence of bass frequencies on the meter readings. And this is what LUFS is. It isn’t a perfect system and it doesn’t even come close to resembling Fletcher Munson curves. I personally don’t care all that much about LUFS. It is useful in broadcast standards but not so useful in mastering for club music, at least not yet.
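To see the K-weighting effect in numbers, one option is the third-party pyloudnorm package (its use here is an assumption, not something from the answer): two tones with identical RMS read very differently in LUFS.

```python
import numpy as np
import pyloudnorm as pyln   # assumed third-party BS.1770 / K-weighted loudness meter

sr = 48000
t = np.arange(10 * sr) / sr
bassy  = 0.5 * np.sin(2 * np.pi * 50 * t)    # 50 Hz tone
trebly = 0.5 * np.sin(2 * np.pi * 5000 * t)  # 5 kHz tone with the same RMS

rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x**2)))
meter = pyln.Meter(sr)   # K-weighted integrated loudness

print("RMS (dB):", round(rms_db(bassy), 2), round(rms_db(trebly), 2))   # identical
print("LUFS    :", round(meter.integrated_loudness(bassy), 1),
      round(meter.integrated_loudness(trebly), 1))                      # the bassy tone reads lower
```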
Any tips to avoid the dreaded "mud" when trying to put together an extremely bass heavy track? I really seem to like tracks that have a lot going on around that 40Hz mark, but it's a very hard area to monitor and mix properly! (via NothingSuss1)
40Hz is a bit too low to reproduce well on many club systems. People think that club systems are big and powerful and can rumble strongly at any frequency they throw at it. The truth is, while club PA systems are generally very big and powerful, it takes a crazy amount of power and also good room acoustics to successfully reproduce frequencies in the 30-40Hz range with visceral loudness and low distortion. If you test drive your tracks regularly in clubs you will see that staying closer to the 50Hz - 65Hz range for kick frequencies is often a safer bet. You need to turn those very low frequencies up loudly in your mix to get them to cut through and then you end up with mud. So it is less of a mix thing and more of a compositional thing to create a mix with low amounts of mud. Or you could also celebrate the mud. Maybe listen to some Sunn 0))).
What is your opinion whether mastering process should influence how well and pleasant the music sounds, or only and exclusively affect the loudness and conformance to standards? (via fourthtuna)
I generally work with the artist to achieve the best possible sound, whatever that takes, but I will not intervene in the creative / compositional process. If you think that it is maybe sort of unfair that some people get external help in making their tracks sound better, then I’d say that, although having a professional mix and mastering job is very beneficial, if the actual tune isn’t good in terms of artistry, then no amount of mastering is going to make it a decent track.
Is analog mastering better than digital? (via Caen83)
Today there is no such thing as analogue mastering. There is mastering exclusively with hardware…. In which case you might use a hardware limiter such as the Waves L2 but this is digital not analogue. Then you have to convert it back to digital at some point if you want to release the music digitally anyway. If you take analogue mastering to mean analogue EQ and compression, then what happens if you don’t need to use compression? Then all you mean by analogue mastering is analogue EQ. In which case, is analogue EQ better than digital? I’d say not necessarily. I do use analogue EQ but I don’t know of any analogue EQ that can be used as a ganged stereo dynamic EQ. So limiting yourself to using only analogue EQ would be a huge downgrade. In short, in modern times, analogue mastering (whatever that is taken to mean) is generally worse in my opinion than a hybrid or fully digital approach.
With plug-ins becoming more and more powerful, Acustica emulating high end tube EQs, and even Softube with the 1:1 Weiss EQ and Compressor, do you think mastering will ever change from analog to hybrid, with just converters and plug-ins? (via secus_official)
It already changed years ago. Very few people do 100% analogue mastering because the limiters are pretty much always going to be digital and the end format is pretty much always digital too. You only generally get all-analogue mastering for speciality projects, like recording to tape and then mastering from tape to vinyl with no digital gear. So in this sense, the whole mastering industry had already gone hybrid many years ago. In 2020 I’d hazard a guess at saying that there are more digital mastering engineers than there are people using analogue EQ. The Weiss gear by the way is, and always was, digital. If what you mean is not analogue but “hardware”, well I don’t really know how meaningful that is. If you have the L2 or the Weiss stuff running in a box in a rack or on your computer, it is the same code processing the digital signal. In fact many engineers sold their hardware L2s because the newer plugins sounded better.
What are some of your favourite tracks you mastered and can you tell what exactly you like hearing in them and mastering them. (via arneleadk)
Tommy Four Seven’s album Veer was an especially cool album to master. To me that album is an obvious landmark in modern techno. Because of the complexity of the production and the massive amount of layers and detail Tommy likes to use in his tracks it was a big challenge to get it sounding as weighty as it needed to be whilst preserving all of the details, clearing some of the mud caused by the complexity in the low end, getting the optimal stereo image to sound wide and full but at the same time be very mono compatible. It had to be loud yet dynamic and hard hitting but graceful in the detail of the sounds. It had to do everything all at once which is the most difficult thing possible in mastering because mastering is normally a balancing act.
What is the difference between tracks you get from seasoned professionals (Paula Temple, T47) vs those you get from new producers? (via dangayle)
Generally the quality of the mixes is instantly recognisable and they don’t make common errors like having the hihats far too loud in the mix etc. Also they know what works in a club and what will cut through on the sound systems and they won’t compose tracks with sounds which don’t translate well in those environments. Beyond the music itself you can generally tell someone who is a pro by the lack of concern for control over the mastering process. When I get a track from one of my long term record labels or artists, a wetransfer email will turn up in my inbox with no note. I master whatever it is and send the masters back and invoice them. They pay the invoice within a week and that is the end of the process, no revisions. With new producers, the same kind of job will take 20 emails and maybe a revision or two after I have requested stems and given mix feedback.
From a mastering engineer's perspective, should producers have their tracks mastered before shopping them to labels, or should they leave that up to the label itself? (via dangayle)
Generally labels like to get their stuff mastered by their own preferred mastering guy and they could even suggest changes to the tracks before they signed them. So there is a reasonably high chance that you will not actually release the masters you pay to get done, and they will need to be redone. However, the question is whether having the tracks mastered so they sound their best, might actually have gotten the attention of the label… maybe if it had not been mastered and sounded a bit more rough, the label may have overlooked it. I would generally advise mastering your stuff if you are confident with the tracks and have the budget as it could be the edge which gets you the deal.
Do you master your own productions as Shards/These Hidden Hands, or are you too close to the music to be objective? (via dangayle)
I have mastered every Shards and THH record. Objectivity comes with time away from listening to the music. You cannot make a track and master it the same evening but you can make an album, have a two week holiday and come back and master it with an increased amount of objectivity, not optimal amounts but enough to do a pretty good job if you can focus. Generally the test is, listen back in a year and if you think “oh shit” then you should probably ask another engineer next time. But with Shards and THH I still think I did a good job looking back, in fact I use one of my Shards tracks as a calibration / reference track and I think that our second THH album, Vicarious Memories, is one of the best album masters I’ve done and I use the track The Telepath as one of my most important references for testing new monitors and headphones. It seems to work for me but some other mastering engineers insist on having other people master their own music. I guess it would be interesting to get another engineer to master the next THH record and then compare it with my own master to see if my objectivity really is impeded… but then again, last time I did that with a Shards track which came out on another label, I had to end up submitting my own master because I hated the master their engineer came up with.

1 year of Keto/IF, by the numbers

(I wrote almost all of this last week, but was waiting for my 1yr lab results to come in before posting.)
After 15 years of slowly packing on pounds through stressful jobs, poor sleep, and indulgent meals, and also a few years after Metabolic Syndrome and NAFLD diagnoses, and a few failed "get healthy" attempts, I was feeling especially run down last year and decided that I really should be smart enough to figure this out. A few weeks of research later, it became pretty obvious that all the nutritional knowledge I had ever been taught, told, or thought I knew (from the food pyramid on) was laughably wrong. I dove feet first into doing a ketogenic diet and intermittent fasting. Here are my 1 year results.
I'm a 5'6" 38yo M and my max weight was 210.2lb (Feb 2018). I started keto/IF (after a week low-carb paleo run-in) at the end of last August weighing in at 200.1lb, and today, 1 year later, am at 153.8lb. That's -46.3lb (-23.1%) at the one year, -56.4lb (-26.8%) from my max weight: https://imgur.com/P3tLuLA
My weight "plateaued" a few months back just a couple pounds shy of my original (arbitrary) 150lb goal, but rather than forcing the last few pounds too much, I've been more focused on getting stronger and on body recomposition. For those frustrated about weight plateaus, I highly recommend taking regular body pics and taking tape measurements (my other measurements have continued to improve despite basically no movement in weight in the past few months).
Being a data nerd and diving into the research, I was actually interested in tracking my personal progress when I started and making the most of my n=1.

Body Composition

I took 3 DXA scans, at 1mo, 6mo, and 1yr, at a local Dexafit (I also did RMR twice, which showed a change in RQ towards fat adaptation (~0.85 at 6mo) and came in very close to the expected (Mifflin-St Jeor) RMR both times). The non-DXA fat estimates are based on linear regression from the 1mo/6mo results and are included just for ballpark reference.
| | Max (Est) | Start (Est) | 1mo DXA | 6mo DXA | 1yr DXA | 1yr Change |
|---|---|---|---|---|---|---|
| Weight | 210.2lb | 200.1lb | 193.4lb | 161.0lb | 155.1lb | -45.0lb |
| BMI | 33.9 | 32.3 | 31.2 | 26.0 | 25.0 | Normal |
| Total BF% | 38.0% | 35.7% | 34.2% | 26.8% | 24.4% | -11.3% |
| Visceral Fat | 3.46lb | 3.11lb | 2.88lb | 1.77lb | 1.02lb | -2.09lb |
One interesting note from my last DXA is that -5.4lb of my -5.9lb change was fat mass, with almost no lean mass lost, which IMO reflects well on my recomp efforts.

Reversal of Metabolic Syndrome

While my A1c had always stayed pretty well controlled (although it has inched down, included for reference), over the past 10 years or so I was steadily adding Metabolic Syndrome markers. I had a solid 3/5 (positive MetS diagnosis), and now I'm at 0/5, so I'm pretty happy about that. My usual fasting glucose tends to hang around 100 (I will probably try out a CGM at some point to get a better idea of the variability), but with my A1c and TG in a good range I'm not too worried about it either way.
| | ATP III | Before | 1mo | 4mo | 9mo | 1yr | 1yr Change |
|---|---|---|---|---|---|---|---|
| HbA1c (%) | | 5.6 | 5.4 | 5.2 | 5.3 | 5.3 | |
| Waist Circumference | >40" | 43.0 | 42.0 | 38.5 | 36.8 | 36.1 | -6.9" |
| Fasting Glucose | >100mg/dL | 100 | 101 | 92 | 99 | 89 | n/c |
| Triglycerides | >150mg/dL | 396 | 153 | | 95 | 95 | -301mg/dL |
| HDL | <40mg/dL | 35 | 34 | | 59 | 52 | +24mg/dL |
| Hypertension | >130/>85mmHg | 122/78 | 124/84 | 126/74 | 117/74 | 115/77 | n/c |
Blood pressure is another highly mobile marker (the best way to lower it seems to be to measure again), and I did buy an Omron Bluetooth BP cuff a few months ago to try to get more frequent measurements/better averages.

Reversal of NAFLD

In 2016 I had a liver ultrasound that showed some fatty deposit buildup. What's interesting is just how fast this can reverse. Despite my AST and ALT being elevated for years (what initially prompted the imaging), those markers largely normalized within the first month.
The gold standard for NAFLD diagnosis is MRI (sometimes liver biopsy is done), but there are many proxy formulas. The LFS (which can have 95% sensitivity!) requires fasting insulin (a less good formula, FLI, can be used instead if you have GGT); however, I only have fasting insulin for my most recent labs (nothing from any of my physicals in the past 10 years). Considering NAFLD is estimated to affect 80-100M people just in the US, that seems pretty insane, but then again, I'm not a medical professional.
Reference Before 1mo 4mo 9mo 1yr 1yr Change
ALP 40-150IU/L 46 43 41 n/c
AST 5-34IU/L 48 26 25 20 25 -47.9%
ALT 0-155IU/L 113 49 29 28 22 -80.5%
NAFLD LFS <-1.413 -2.03 -2.63 reversal

Insulin Resistance

With my MetS and NAFLD, it was obvious I had some level of insulin resistance. As part of my baseline testing I wanted to get a fasting insulin with other blood work but my doctor at the time balked and said the NMR would give me an IR score already and that I shouldn't get my fasting insulin measured. I didn't argue, but I regret that now, since without fasting insulin you can't calculate the most well known/effective IR formulas (or as mentioned, your NAFLD LFS). Also, it turns out that a fasting insulin test is only a $30 test even if you have to pay out-of-pocket (LC004333). You could also get it as part of a bundle (LC100039) that is only $8 more than an A1c alone. This really pissed me off and I've since switched doctors to someone who's significantly less clueless/more interested in improving metabolic health.
Reference Before 1mo 9mo 1yr
Fasting Glucose <100mg/dL 100 101 99 89
Fasting Insulin <8mcU/mL 4.9 2.2
METS-IR <51.13 60.21 51.79 35.20 35.21
TyG1 <8.82 9.89 8.95 8.46 8.35
TC/HDL <5.0 7.32 4.88 4.71
TG/HDL <2.8 11.31 4.50 1.61 1.83
LP-IR <=45 50 32
HOMA-IR* 0.5-1.4 1.10 0.48
HOMA2-IR* <1.18 0.66 <0.38
QUICKI* >0.339 0.37 0.44
McAuley Index* <5.3 2.17 2.71
* Requires Fasting Glucose and Fasting Insulin
One interesting note is that a fasting insulin of 2.9 mcU/mL is the minimum valid value for calculating HOMA2-IR. My general takeaway is that my insulin sensitivity is very good these days.
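For anyone who wants to reproduce these indices from their own labs, here is a minimal sketch using the standard published formulas and the 1yr values from the table above (glucose 89 mg/dL, insulin 2.2 mcU/mL, TG 95 mg/dL, HDL 52 mg/dL). The function names are mine; treat this as a reference calculation, not the exact tools used to fill in the table.

```python
# Standard insulin-resistance index formulas, applied to the 1yr lab values.
import math

def homa_ir(glucose_mgdl: float, insulin_mcuml: float) -> float:
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (mcU/mL) / 405."""
    return glucose_mgdl * insulin_mcuml / 405.0

def quicki(glucose_mgdl: float, insulin_mcuml: float) -> float:
    """QUICKI = 1 / (log10(insulin) + log10(glucose))."""
    return 1.0 / (math.log10(insulin_mcuml) + math.log10(glucose_mgdl))

def tyg(tg_mgdl: float, glucose_mgdl: float) -> float:
    """TyG = ln(fasting TG (mg/dL) x fasting glucose (mg/dL) / 2)."""
    return math.log(tg_mgdl * glucose_mgdl / 2.0)

print(round(homa_ir(89, 2.2), 2))   # 0.48, matching the table
print(round(quicki(89, 2.2), 2))    # 0.44
print(round(tyg(95, 89), 2))        # 8.35
print(round(95 / 52, 2))            # TG/HDL = 1.83
```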
Also as a bit of an aside, my Vitamin D at my 6mo check (physical with new doctor) was the highest (36ng/mL) it's been over the past 10 years (as low as 11ng/mL and never higher than 30ng/mL even with prescription supplementation), despite not getting much sun over the winter/spring. Vitamin D deficiency is associated with MetS, so just thought I'd throw that in there.

CVD Risk

My LDL did jump up a bit doing keto/IF, although I'll preface this by saying that using the ASCVD risk calculator (with some fudging since it doesn't give an answer below 40yo) my risk has more than halved (3.1% to 1.3%) even with the higher LDL numbers (it doesn't actually affect the risk algorithm results except at cut-off, which should tell you something about how important LDL is as a risk factor). CVD is its own long discussion, but I've done a fair amount of research on hazard ratios, and LDL (and actually even the better lipid markers) is simply not a very strong risk factor for CVD compared to MetS, smoking status/history, psychosocial factors, hypertension, etc.
Reference Before 1mo 9mo 1yr 1yr Change
Total Cholesterol <200mg/dL 264 249 288 245 -7.2%
HDL-C >40mg/dL 35 34 59 52 +48.6%
LDL-C (calc) <130mg/dL 150 184 210 174 +16.0%
Remnant <20mg/dL 79 31 19 19 -74.7%
Triglyceride <150mg/dL 396 153 95 95 -76.0%
TG:HDL <2 11.3 4.5 1.61 1.83
Note: I did get an NMR at 1mo and 9mo, and furthermore, I got a second NMR and Spectracell LPP+ 2 weeks later (due to a blood draw faffle - I really wanted to match results from the same draw, as advanced lipid panel results differ greatly), which I paid out of pocket for just to get some more insights into particle sizes, counts, etc. (my particle counts are high, but notably I shifted from Pattern B to A on the NMR, and the LPP+ shows very low sdLDL IV). My main conclusion is that even beyond the meager hazard ratios, lipid testing is only vaguely useful in a ballpark sort of way because serum lipids are so mobile - in the two weeks between draws, with no major lifestyle changes and controlling for fasting/draw times, there was a 14% TC difference, a 25% HDL-C difference, a 26% TG difference (causing a 41% TG:HDL ratio change), and a 20% LDL-C difference. Even from the same draw, the NMR and LPP+ had a 15% difference in LDL-C results.
If you are going for advanced lipid testing, IMO the Spectracell LPP+, while expensive ($190 was the cheapest I could find online) and a PITA to order (you'll also want a phlebotomist familiar with Spectracell procedures or they will mess up), is the superior test. It includes insulin, homocysteine, hsCRP, apoB, apoA1, and Lp(a), is more granular with LDL and HDL sizes, and is the only US clinical test I could find that gives you a lipid graph so you can look at the actual particle distribution (sample report). That being said, I think unless you're going to do regular followups with it, or know exactly what/why you are looking for, it's probably not worth it.
Oh also, I am APOE2/3, but have the PPARG polymorphism that suggests I might want to have more MUFAs, but ¯\_(ツ)_/¯
In terms of general cardiometabolic health (I don't have good RHR numbers since I switched devices last year), I think this 1yr comparison probably says more than the lipid panel does: https://imgur.com/J14cYNV

Fasting Stats

I started with an unintentional 24h fast, but basically aimed for a 16:8 (although often went 18-20 or longer simply due to not being hungry), with an occasional longer fast about once a quarter (first a 2 day, then 3, with an almost 4 day being my longest). There's a lot of suggestive research on the benefits of prolonged fasting, and it was something I was curious about being able to do. Here's my Zero stats: https://imgur.com/CFBAkGV

Ketone Testing

I tried out all the acetone and BHB testers for the first time at Low Carb Denver (where I wasn't eating quite my regular routine), but after morning sessions and at the end of a regular (16h) fast, was at about 1.2mmol and pushing out lots of acetone breath. Again, ¯\_(ツ)_/¯

Fitness

I started my first couple months without doing much physical activity, but about two months in, decided I should have some fitness goals, with the aim of getting some functional strength. When I started, I was able to do 0 pull ups, and I'm up to about 7 now (if I try hard enough). I also went from 8 pushups max to 30, and I've started trying out diamond and some other more challenging variations now. YouTube started recommending me climbing videos a while back, and I've also joined a bouldering gym now as well.
I'm pretty averse to cardio training, but it turns out that when you're carrying fifty fewer pounds, walking, hiking, and biking all become much easier, so I've noticed huge improvements in my excursions despite the lack of any cardio-focused workouts.

NSVs

I also kept a list of various NSVs so I don't forget just how drastically my health has changed from a year ago:

What's Next?

That was a lot longer than I expected, so if you've made it to the end, give yourself a high five! Also, if you made it this far, well, here's a before and after pic: https://imgur.com/KLcGJRE
This next year I'm looking to continue getting stronger, focusing on improving my sleep schedule, and continue trying to optimize my energy levels. As a stretch goal, it'd be nice to see if I can get to 15% BF, but I guess we'll see about that.
It also turns out that metabolic health research has become somewhat of a hobby and I've gone through thousands of papers at this point. I'm working on a more productive way to organize, synthesize, and share all that.
submitted by randomfoo2 to keto [link] [comments]

[Q] Sample size calculation for survival

I need to calculate sample size needed for a log rank test. I have a small dataset containing days to death in an experimental and control group to use as a postulated effect size. No censoring.
The median survival time in one of the groups is 160, in the other 105, and the total study length is 300. Now in order to calculate the sample size I need to put in a hazard ratio. I assume exponential survival and calculate each group's hazard from its median survival time using h = ln(2)/median. So the resultant HR is (ln(2)/105)/(ln(2)/160) = 160/105 ≈ 1.52.
However, when I fit a Cox model on this data (using the survival package in R), the estimated HR is about 10 times this, around 15. Indeed there is a very large difference between these groups if you plot them.
Why is my calculation so far off? Have I missed something?
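For reference, here is a minimal sketch of the exponential-median calculation described in the question, together with the Schoenfeld events approximation that many log-rank sample-size calculators use (the calculator choice is an assumption, not something stated above):

```python
# Exponential-median hazard ratio plus the Schoenfeld events approximation
# for a two-sided log-rank test with 1:1 allocation (assumed conventions).
import math
from scipy.stats import norm

med_a, med_b = 160, 105                        # median survival times (days)
haz_a, haz_b = math.log(2) / med_a, math.log(2) / med_b
hr = haz_b / haz_a                             # = 160/105, about 1.52 (group B vs A)

alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
events = 4 * (z_a + z_b) ** 2 / math.log(hr) ** 2   # events needed
print(round(hr, 2), math.ceil(events))         # 1.52 177
```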
submitted by Historicmetal to statistics [link] [comments]

[Q] sample size calculation for clustered survival analysis

Suppose I want to compare survival data for two groups of animals, treatment and placebo control. But many of the treated animals will have the same donor and so are expected to be correlated. How would I go about calculating sample size for this study?
I initially wanted to plan for a logrank test comparing survival in the two groups, but since they cluster by donor, I needed to account for that. I considered a stratified logrank test (using the powerSurvEpi package in R), but this requires you to assume the same hazard ratio in all strata. I feel like the hazard will differ by donor...
Any advice would be much appreciated.
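The question is left open above, but one rough, commonly used workaround is to compute an unclustered sample size and then inflate it by a design effect. The sketch below illustrates that idea; the hazard ratio, cluster size and intracluster correlation are made-up placeholders, not values from the question, and this is not a substitute for a method built specifically for clustered survival data (e.g. a frailty model).

```python
# Unclustered log-rank event target, inflated by the design effect
# 1 + (m - 1) * ICC, where m is the average cluster (donor) size.
import math
from scipy.stats import norm

hr, alpha, power = 0.66, 0.05, 0.80            # placeholder effect size
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
events = 4 * (z_a + z_b) ** 2 / math.log(hr) ** 2

m, icc = 4, 0.1                                 # assumed cluster size and ICC
design_effect = 1 + (m - 1) * icc
print(math.ceil(events * design_effect))        # inflated event target
```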
submitted by Historicmetal to statistics [link] [comments]

A Market Research Analyst's Take on the DATA, the Tier List, and Balance.

After making this post about the official tier list, I received a number of comments regarding the value of the analytics and what can be known based on such data. Much of what was said was valid, although I think there is some confusion - or at least disagreement - about what should or can be drawn from such data, and how game balance should be assessed. As a market research analyst and engineer in my day job, I thought I'd give a professional opinion on what recent posted data suggests, the tier list and what it means, and how high level balance might be measured. For those of you who are not interested in getting into the statistical weeds, this post is probably not for you.
Some background on me for context: I graduated a few years ago from UCLA with a degree in Civil Engineering, but contracted as a data analyst conducting social media research under an Econ professor at UCSD during my last 2 years of college, and 1 year out of college. For the last 3 years I've worked as a product development engineer in the R&D department of a manufacturing company, where I specialize in market research and systems development/optimization.
Alright, now for the meaty bit.

WHAT DATA HAS BEEN GIVEN AND WHAT CAN BE DRAWN FROM IT

As many people have pointed out, the majority of the data we have been given by Ubisoft is without context and is largely useless for drawing combat balance conclusions. For example, the duel win/loss ratios posted serve only to inform us of where the aggregate population's duel tendencies currently are, which is not helpful from a combat balance perspective.
However, I would like to point out what it is useful for: casual player perception. Casual player perception is not useless. In fact, in many ways casual player perception is more important in the short term than combat balance, as the experience of the casual player, especially new players, drives revenue for the game. In this sense, the win/loss table posted by Ubisoft reflects the actual likelihood of each hero to pose a problem for a new player. This is objectively useful when attempting to balance the early game experience in PvP, even though it is useless in terms of high level play.

THE TIER LIST AND WHAT IT MEANS

Recently, Kaiayos polled N=13 players who regularly participate in high level tournaments on what the win/loss rate of each hero matchup would be, out of ten matches. This data can be found here, and has been the source of some disagreement, mainly around the existing tier list and whether it is valuable. I'd like to address the value of both of these things together, as they revolve around a central question: are the opinions of high level players valuable?
In general, opinions are a poor measure relative to objective fact. When evaluating whether a person is guilty of a crime, for example, juror opinions at best align with evidence, and at worst are utterly subjective and biased. However, that does not mean they are strictly useless. In fact, opinions can correlate to reality very well provided the respondents have the following 3 qualities:
In these cases, even with cognitive biases, consensus tends to track reality quite well. Granted, the distinction between when these apply or not is not sharply defined, but this is mainly because, as each of these 3 criteria is better satisfied, i.e. as the respondents become more informed or more aligned, the correlation of the consensus to reality increases. With an infinite number of people as informed as is humanly possible, who are perfectly aligned on what their purpose in taking the survey is, there is a theoretical maximum on how closely their opinions match reality that is primarily limited by human cognitive biases and what is known about the subject. Sometimes these opinions are close enough to reality that the difference can be considered insignificant.
This dynamic is largely why capitalism works well in most cases; a large, informed body of consumers focused on product value are ideal adjudicators for market success. So the question is: does the survey data fit these 3 criteria? The answer depends on your standard of proof.
Given that we have no data other than this to work from, I'd say it's not terrible. The 13 survey respondents seem, to my eye/ear, to largely agree on which heroes perform best at a high level, and also to what degree. These opinions are backed primarily by scrimmages with players of similar talent, and by tournaments. If there are no hidden biases skewing the data, and if the sample size is large, outliers within this set (say, a specific person who thinks Orochi is bottom tier) will not have much impact on the whole. The primary flaw with this data is its sample size. N=30 is required to reach P=0.05, and that's provided that the data is completely normal. If I recall correctly, N=13 puts the maximum power closer to P=0.2, which is not great.
If I had to hazard a guess based on my experience analyzing systems similar in complexity to For Honor's pvp data, I'd say that high level player opinions would at most correlate to real game data at around R2=0.9, so the tier list from Setmyx is, in my opinion, probably less than 80% accurate, although certainly not totally useless, as some have claimed. An R2 > 0.7 would be quite good under these circumstances.

HOW HIGH LEVEL BALANCE MIGHT BE MEASURED

If I had full access to Ubisoft's data, the way I would set about building an empirical tier list and balancing the game would start with defining the goal of analysis. In this case, I would state that the overall goal is to set the qualities and mechanics of each hero such that, given two players of equal skill to each other with every hero, the probability of a player winning with any specific hero given a randomly selected opposing hero is 0.5. This would mean that each hero is balanced against other heroes, and evenly skilled players, if matchups were randomly selected, would always win around half of the duels.
The reason this definition is important is that, should this be achieved, there would be no tier list that would make sense. Every hero would be relatively balanced against other heroes, such that a success-weighted average of each hero's matchups would result in a 50% adjusted win rate. Individual matchup dynamics could vary greatly under this paradigm, keeping the somewhat rock/paper/scissors aspect of this game that makes it unique.
To achieve this, the first thing to define besides the goal is the set of players to sample in order to study win probability. This is a trickier question, as there are a lot of factors that make it difficult to define what is a "high level player". The following obfuscate this:
The way to get around these issues is the following:
This should yield a very large number of connections, as all players on PC over a given set of weeks represents a large set of players. Naturally this should yield a higher number of matchups for heroes that are perceived as stronger, but with a large enough pool the heroes perceived to be weaker should still appear frequently.
For each player in this cluster, for each hero on the roster, calculate that player's win probability with that hero against every hero. This creates a matrix of win probabilities for each hero matchup for each player.
Now we need to create a success-weighted average of each player's performance with each hero. For each player, for each hero, for each fight with that hero, multiply that player's win results (either 1 or 0 for a win or loss, or, if the data is available, the exact number of rounds won over number of rounds played) by the opposing player's overall win probability with the hero the opposing player played. At the end of this step, for each player, for each hero, there should be a long list of duel results weighted by the overall win probability of their opponent. Average this list.
For every hero, sort the players by average weighted win probability. Take the top 1% of players for each hero. This should yield a unique set of players for every hero that represent the top 1% of players with that hero. This process ranks the results of each player's match performance with each hero based on the probability that their opponent would win against a random opponent they have faced in the past, and represents a relative, but empirical, measure of the best players with each hero.
With these sets of players, the win/loss ratio of each matchup can now be averaged for each hero and used to create a high level player win percentage matrix. If a change to a character's kit brings the average of its win probabilities to 0.5, the change moved toward balance per our initial definition. Updating the list based on shifting player participation and bug fixes should be relatively easy, as only heroes who received a patch need to be updated (although it wouldn't hurt to update all of them).
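As a concrete illustration of that pipeline, here is a minimal sketch assuming match records are available as a table with columns player, hero, opp_player, opp_hero and won; the column names and DataFrame layout are illustrative, not Ubisoft's actual schema.

```python
# Success-weighted ranking and high-level matchup matrix, per the steps above.
import pandas as pd

def weighted_win_matrix(matches: pd.DataFrame, top_frac: float = 0.01) -> pd.DataFrame:
    # Overall win probability for every (player, hero) pair.
    overall = matches.groupby(["player", "hero"])["won"].mean()

    # Weight each duel result by the opponent's overall win probability
    # with the hero the opponent played.
    m = matches.merge(
        overall.rename("opp_p_win"),
        left_on=["opp_player", "opp_hero"],
        right_index=True,
        how="left",
    )
    m["weighted"] = m["won"] * m["opp_p_win"]

    # Success-weighted average per (player, hero), then keep the top slice
    # of players for every hero.
    scores = m.groupby(["player", "hero"])["weighted"].mean().reset_index()
    top = scores.groupby("hero", group_keys=False).apply(
        lambda g: g.nlargest(max(1, int(len(g) * top_frac)), "weighted")
    )

    # Matchup win rates restricted to those top players, averaged per hero pair.
    elite = m.merge(top[["player", "hero"]], on=["player", "hero"])
    return elite.pivot_table(index="hero", columns="opp_hero", values="won", aggfunc="mean")
```

The returned matrix is the high-level win percentage table described above, restricted to the top 1% of players per hero.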
Ubisoft: I'm here if you want to talk.
TL;DR:
EDIT: Just as a side note, if I was given an NDA, data that matches my criterion, and a weekend with my desktop at home, I could probably produce this data. I've gotten pretty good at rush-job python research code ;D
submitted by MrFanzyPanz to CompetitiveForHonor [link] [comments]

Channel Form Prediction of Chinda Creek: A Critical Factor to Sustainable Management of Flood Disaster in Port Harcourt, Niger Delta Nigeria- Juniper publishers

Abstract
Flooding has been identified by different scholars as a major challenge facing communities; hence this study examines the role of water bodies in the control and management of flood. The study was conducted in Chinda Creek in the Ogbogoro section of the New Calabar River, Niger Delta Nigeria. Measurements of the study variables were taken to identify the influence of velocity, sediment yield, depth and discharge on channel morphology. The channel length measured 643.275m and was divided into 30 sample points where measurements of the study variables were taken. The result from the correlation revealed that the channel morphology of Chinda Creek is significantly correlated with discharge and depth. It also has positive correlations with velocity, bed load and suspended sediment load, but these correlations were not significant. Multiple regression analysis was used, and the result showed that only two variables, discharge and velocity, provided 94.8% of the explanation for the variation in channel morphology. Hence the study recommended planned sand mining of the creek to increase its capacity for discharge as well as serve as a flood control mechanism in the study area.
Keywords: Flood; Disaster; Management; Sustainability; Channel; Prediction

Introduction

Stream channels have similar forms and processes throughout the world. Water and sediment discharge create channels as they flow through drainage networks. Obstructions and bends formed from resistant material can locally control channel form by influencing flow and sediment deposition [1]. In forest streams where structural elements such as woody debris, bedrock, and boulders are commonly abundant, these effects are particularly important. Sediment load, water discharge, and structural elements, the controlling independent variables of channel morphology, determine the shape of the channel along the stream network. The form of any channel cross section reflects a balance between the channel’s capacity to carry sediment away from that point and the influx of sediment to that point. A stable channel is one whose morphology, roughness, and gradient have adjusted to allow passage of the sediment load contributed from upstream [2]. Characteristics of the banks also influence the cross-sectional shape of the channel and help to regulate channel width at any point in the stream.
Chinda Creek, which is a tributary of the New Calabar River, is an alluvial river, in that it flows through sands, silts, or clays deposited by flowing water [1]. Natural alluvial rivers are usually wide, with an aspect ratio (width to depth) of 10 or greater [3], and the boundary can be moulded into various configurations, as was demonstrated in the seminal work of Gilbert in Roberts [4]. With alluvial rivers, the channel geometry is influenced not only by the flow of water but by the sediment transported by the water. When the flow discharge changes, the sediment transport changes and, in turn, the channel geometry usually changes. Morphological change in stream channels may be a result of streamside forest harvesting. Millar [5] developed a model to predict stream channel morphology based on the condition of riparian vegetation. This model was tested on a portion of Slesse Creek (a tributary to the Chilliwack River) downstream of an old-growth area in the headwaters. The riparian area was extensively logged in the 1950s and 1960s, and has subsequently become parkland. The model predicted that in the presence of dense riparian vegetation, Slesse Creek would form meandering channel morphology, and that in the absence of dense riparian vegetation it would form a braided channel. These predictions were then confirmed using pre- and post-logging air photos.
However, corresponding changes in stream morphology may change stage discharge relationships and thereby increase or decrease peak flood stages [6]. Thus, predicting changes in base level and channel morphologies are important steps toward understanding future stream behaviors and risks.
A few key relationships describe the physics governing channel processes and illustrate controls on channel response. Conservation of energy and mass describe sediment transport and the flow of water through both the channel network and any point along a channel. Other relationships describe energy dissipation by channel roughness elements, the influence of boundary shear stress on sediment transport and the geometry of the active transport zone.
A common problem faced by geomorphologists is the identification of the dominant process responsible for the creation of a particular form. Arising from this, the interest of the study is to examine the influence of hydraulic parameters such as depth, discharge, velocity, bed load and suspended sediment load on channel morphology, and also to identify the major factors controlling morphological change in the area. The study therefore intends to develop a model which predicts channel morphology from hydraulic parameters, with the intent to identify its role in sustainable flood disaster management.

Studies On Channel Form Prediction

Changes in channel morphology following large sediment inputs have been demonstrated in several regions. Lisle [7] showed a decrease in pool depths following a large flood and associated channel aggradation. Madej & Ozaki [8] quantified the decreases in both pool depth and frequency associated with a sediment pulse. The model for predicting morphologic change developed by Millar [5] indicated that Narrowlake Creek is a transitional watershed, but it was not sensitive enough to accurately predict the apparent shift from a meandering to a braided morphology. This reinforces the notion that streamside forest harvesting does affect stream channels in the Central Interior, though not necessarily in a way that can be readily predicted from hydrology models or empirical analysis. While it is impossible to quantify the exact amount of channel widening in Narrowlake Creek directly associated with forest harvesting, the cumulative effects of logging and natural disturbance have led to channel change throughout the logged portions of the watershed. The predictive model Millar [5] developed is a tool for Slesse Creek, Canada, and will be important for future prescription development in watersheds. However, for transitional systems like Narrowlake Creek in Vancouver, model predictions indicate that cautionary measures for either floodplain protection or restoration must be undertaken.
The linkages between logging activity and channel morphology are complicated. Predictive models have great value as tools that can be used to assist in successful watershed protection and restoration, but it will be important that they are not used without watershed analyses, particularly in the case of transitional systems. The biological implications of the Millar (2000) model, as indicated by the Narrowlake Creek and Slesse Creek case studies in Canada, are profound and worth the effort of further analyses and adjustment to provide a useful tool for both watershed protection and restoration.
Similarly, Oyegun [9], in his study on channel morphology prediction using urbanization index, discharge and sediment yield of the upper Ogunpa River, discovered that discharge was a major determinant of channel form and was therefore able to develop a model for channel morphology prediction using the above variables. This was also the case for Oku [10], whose study revealed a significant correlation between discharge and the channel shape and size of Ntawogba Creek in Port Harcourt, where discharge was the main determinant of channel morphology amidst several other variables.
As cited by Oku [10], Faniran & Jeje stated that the geology of a basin is a determining factor of channel shape and size characteristics; their work on the Rima basin revealed that, despite discharge and the other variables that predict and determine channel shape and size, the channel's geological characteristics determine the level of carving and enlargement of a channel.
Various studies carried out by geomorphologists, both local and international, agree that channel form prediction, as well as its determinant variables, seems to follow a similar trend irrespective of climatic conditions.

Method Of Study

The study was conducted in Chinda Creek in Ogbogoro town in Obio/Akpor Local Government Area of Rivers State, which is located at latitude 4° 50’42.00’’N and longitude 6° 55’44.10’’E. The community is about 1.37 kilometres away from the creek, which lies at latitude 4° 50’2.43’’N and longitude 6° 56’6.26’’E. The total length of the creek to an adjoining creek called Okolo-Nbelekwuru is 1.93 kilometres; connecting to the New Calabar River, the total length is 3.04 kilometres.
Field studies and river measurements of Chinda Creek in the Ogbogoro section of the New Calabar River were carried out. This was to enable the examination of the influence of velocity, sediment yield, depth and discharge on channel morphology. To do this, measurements of velocity, depth, discharge and sediment yield of the channel were taken. The length of the channel was determined with the aid of a measuring tape, and the channel measured 643 meters. This was divided into 30 sample points as data collection points for the entire channel, at an interval of 21.4m each.

Velocity determination

To determine the velocity of flow in the channel, several methods of velocity measurement identified in the International Irrigation Management Institute report No. T-7 were considered, but in the case of this study the two-point method was used. This implies that instead of taking measurements at the water surface alone, velocity measurements were taken both near the surface and beneath it, precisely at 0.2 and 0.8 of the flow depth. This is because the flow depth of the river exceeds 0.76m [11].
Therefore velocity meter measurements were taken at 0.2 and 0.8 of the flow depth, d. This was done with the use of a digital water velocity metre. The mean velocity was obtained by averaging the velocities measured at 0.2d and 0.8d of the flow depth. Thus, the mean velocity V in the reach would be:
V = (V0.2d + V0.8d) / 2 (1)

Determination of depth

To determine the depth, measurements were taken at each sample point in the channel. This was done with the aid of a calibrated leveling staff.

Discharge determination

To determine discharge, the principle is to obtain the discharge per unit width q (m2/sec) as the product of the mean velocity in the vertical (m/sec) and the water depth d at that vertical at the moment of measurement. This method remains the same whether the measurements are carried out under permanent or non-permanent flow conditions. The total discharge of the channel was calculated from the velocity measurements in the channel.
Therefore discharge Q =VA (2)
Where V =mean velocity, A = cross sectional area.

Bed load measurement

To measure the bed load, the handheld US BLH-84 bedload sediment sampler was used [12]. The reason for this choice is that it is mechanically simple and can be used at depths of up to 3m. Measurements were taken at each sample point in the channel. To calculate the transport rate, the sediment transport formula put forward by Chang et al. [12] was used.
[Equation (3): bedload transport formula of Chang et al. [12]]
In which,
gb = transport in kg/s
wi = weight of bedload sample in kg.
hs = width of sampler nozzle in meter
b = section width of the stream in meter.

Suspended sediment yield

To measure the suspended load, the Depth-Integrating Suspended-Sediment Wading Type Sampler Model DH 48 was used. The sampler container is held in place and sealed against a rubber gasket in the sampler head by a hand-operated spring-tensioned clamp at the rear of the sampler. The sampler, once immersed in the channel, was removed after 5 minutes and emptied into separate clean used bottled-water containers for each of the 30 sample points. The content was thereafter filtered to determine the weight of the clastic particles in the water sample. Since the sampler has a volume of 470cc, the researcher ensured that the volume of water collected did not exceed 440cc but fell within the range of 375cc to 440cc. To achieve this, enough time was given during submergence of the sampler to ensure that the volume of the sampled water fell within the acceptable standard.

Channel morphology

The ultimate goal of the data collection process was basically to assess the relationships between the various independent variables of discharge, velocity, depth, bedload and suspended sediment yield on one hand and the channel morphology on the other hand. The data set of Chinda Creek was collected with the aid of a calibrated leveling staff and measuring tape. Within the context of the present study, channel morphology, which is the shape of the channel, refers to the cross sectional area of the channel at the various sampled points of the basin. In other words, the average channel width and depths were measured and their products were stated in square metres. This was done using the Cuencia [13] formula for estimating cross sectional area.
Area = width x depth (4)
The cross sectional area of the thirty (30) sample points was determined.
From the data generated, the mean cross sectional area of the channel was 10.7223 m² with a standard deviation of 3.70872 m².
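As a quick illustration of how the per-sample-point quantities above fit together, here is a minimal sketch of the two-point mean velocity, cross-sectional area and discharge computations; the variable names and the example numbers are illustrative, not field data from the study.

```python
# Two-point mean velocity, cross-sectional area and discharge for one point.
def mean_velocity(v_02: float, v_08: float) -> float:
    """Two-point method: average of velocities at 0.2d and 0.8d."""
    return (v_02 + v_08) / 2.0

def cross_section_area(width_m: float, depth_m: float) -> float:
    """Equation (4): area = width x depth, in square metres."""
    return width_m * depth_m

def discharge(width_m: float, depth_m: float, v_02: float, v_08: float) -> float:
    """Equation (2): Q = V x A, in cubic metres per second."""
    return mean_velocity(v_02, v_08) * cross_section_area(width_m, depth_m)

# Example for one hypothetical sample point:
print(discharge(width_m=4.2, depth_m=2.5, v_02=0.31, v_08=0.24))  # ~2.89 m^3/s
```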

Data analysis

Tables and charts were used in the presentation of data while in the analysis bivariate and multivariate analytical techniques (Correlation matrix and multiple regression analysis) were used. The model equation of the stepwise multiple regression analysis is as follows:


Y = a + b1X1 + b2X2 + b3X3 + b4X4 + b5X5 + e … (5)
Y = Channel Morphology
a = regression constant
b1 - b5 = regression co-efficient
X1 = velocity
X2 = Depth
X3 = Discharge
X4 = Suspended load
X5 = Bed load
e = error term
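For readers who want to reproduce this step, here is a minimal sketch of fitting the full model in equation (5) by ordinary least squares; the input file name and column names are hypothetical, and the study's stepwise selection (which retained only discharge and velocity) is not re-implemented here.

```python
# Ordinary least squares fit of channel morphology on the five hydraulic
# parameters, assuming the 30 sample-point measurements are in a CSV file.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("chinda_creek_samples.csv")  # hypothetical input file
X = sm.add_constant(df[["velocity", "depth", "discharge", "suspended_load", "bed_load"]])
y = df["cross_sectional_area"]  # channel morphology proxy (width x depth)

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients b1..b5, R^2, F statistic, p-values
```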

Results And Discussion

Pair-wise correlation between hydraulic parameters of chinda creek

This section examined the predictive capacity of the hydrological parameters of discharge, velocity, depth, suspended sediment yield and bed load for channel morphology in Chinda Creek, using the SPSS multiple regression (R) statistical tool.
Below is a correlation matrix table which identifies the relationship between the dependent variable of channel morphology and the independent variables of velocity, depth, discharge, bed load and suspended sediment yield (Table 1).
[Table 1: Correlation matrix of channel morphology with velocity, depth, discharge, bed load and suspended sediment load]
(*0.05 significant level).
Table 1 shows the correlation matrix of the five independent variables of velocity, depth, discharge, bed load and suspended load with the dependent variable of channel morphology of Chinda Creek in Ogbogoro. The tests of the various relationships are summarized in Table 1. The Student’s t statistic at the 0.05 significance level revealed that the channel morphology of Chinda Creek is significantly correlated with discharge and depth. It also has positive correlations with velocity, bed load and suspended sediment load, but these correlations are not significant [14,15].
The finding of the study is of importance to geomorphological studies: even though velocity, bed load and suspended sediment load do not significantly correlate with the channel morphology of Chinda Creek, they indirectly contribute to the existing channel form. In other words, discharge is partly a function of velocity.
Table 2 shows that only two variables, discharge and velocity, entered the regression equation. Discharge alone provided 59% of the explanation for the variation in channel morphology of the study creek, while velocity accounted for a further 35.8%. Hence the total explanation provided for the variation in channel morphology by the independent variables of discharge and velocity is 94.8%.
[Table 2: Stepwise multiple regression model summary]
Source: SPSS Analysis result
In conclusion, this study has revealed that discharge and velocity are the predictors of channel morphology in Chinda Creek. It should also be noted that suspended sediment yield, bed load and depth are indirect predictors of channel morphology in Chinda Creek. This is because they correlate positively with channel morphology and also have positive correlation with velocity and discharge which are the direct predictors, with net effect resulting in increased velocity and discharge.
More so, the five independent variables of the study directly or indirectly affect channel morphology of Chinda Creek. This shows that channel morphology of Chinda Creek correlates positively with discharge, velocity, depth, suspended sediment yield and bed load.
The stepwise multiple regression, as shown in Table 3, revealed that discharge and velocity explain the change in channel morphology, as they jointly account for 94.8% of the change in channel morphology.
[Table 3: Stepwise multiple regression coefficients]
Thus, the hypothesized model developed by this study is of the form,
Y = 10.348 + 1.312X1 - 0.808X2 (6)
Where,
Y = Channel morphology
X1 = Discharge
X2 = Velocity
In order to determine the significance of this relationship, Table 4 below was used.
The analysis of variance in Table 4 shows that the two independent variables that significantly explain variation in Chinda Creek morphology jointly explain about 94.8% of the variation in channel morphology of Chinda Creek. The calculated F value of 246.68, which is greater than the table value of 3.35, reveals that discharge and velocity influence the channel morphology of Chinda Creek. This therefore implies that channel morphology is influenced by hydraulic parameters.
[Table 4: Analysis of variance for the regression model]
(*0.05 significance level).

Conclusion And Recommendation

The analysis showed a positive correlation between discharge and channel morphology. The relationship was statistically significant at the 95% level. The multivariate technique used in the SPSS computer programme, the step-wise multiple regression analysis, revealed that discharge was the single most important predictor of Chinda Creek morphology, as it explains 59% of the variation in the existing channel morphology of Chinda Creek.
From the analysis of the study, the developed model helps in predicting channel morphology using suspended sediment yield, bed load, velocity, discharge and depth, and is of the form:
Y = 10.348 + 1.312X1 - 0.808X2 (7)
Where,
Y = dependent variable (channel morphology)
X1 = discharge (independent variable)
X2 = velocity (independent variable)
One of the findings of the study is that the channel has high discharge. It also revealed that discharge and velocity are the major predictors of the channel form, with discharge providing 59% of the variation in channel morphology of Chinda Creek. The implication of this is that discharge has helped in the clearing of the creek, a tributary of a major river, the New Calabar River. Velocity also provided 35.8% of the variation in channel morphology of Chinda Creek; this has contributed immensely to increasing the rate of flow in the channel and the amount of water the channel discharges.
The study therefore recommends that planned sand mining of the creek should be done, to ensure that it has more capacity for discharge as well as serving as a flood control mechanism in Ogbogoro community, noting its role in the control of flood within the rural catchment. This will also allow traffic flow for water transportation while generating revenue for the Government and the community through the sand mining process. With the growing demand for land space, especially within rural catchments, exposure of the earth surface as well as concretization of surfaces are likely, hence the tendency to increase surface runoff in the area. There is therefore a need for annual and bi-annual studies of the state of the streams, creeks and other water bodies within the rural catchment to determine their role in flood control, as a means to curb the menace of flooding, which is a major environmental hazard in the Port Harcourt region.
To Know More About Journal of Oceanography Please Click on: https://juniperpublishers.com/ofoaj/index.php
To Know More About Open Access Journals Publishers Please Click on: Juniper Publishers
submitted by JuniperPublishers-OF to u/JuniperPublishers-OF [link] [comments]

Socioeconomic Variables Associated with Level of Obesity and Prevalence of Other Diseases Among Children and Adolescents of Some Affluent Families of Bangladesh| Lupine Publishers

Socioeconomic Variables Associated with Level of Obesity and Prevalence of Other Diseases Among Children and Adolescents of Some Affluent Families of Bangladesh| Lupine Publishers

Lupine Publishers| Journal of Diabetes and Obesity

Abstract

The present study utilized the data collected from 662 children observed from 560 randomly selected families of students of American International University-Bangladesh. Among the investigated children and adolescents, 465 were in the underweight group. Obesity and severe obesity were observed among 9.1 percent of children and adolescents, and prevalence of diabetes was observed among 22.8 percent of respondents. The percentage of respondents affected by diseases other than diabetes was 13.4. It was evident that level of obesity, prevalence of diabetes and prevalence of other diseases were significantly associated, and level of obesity was associated with different socioeconomic characteristics of the parents and of the respondents. Parents’ education, age of children, family income and food habit of the children were the most important variables for the change in level of obesity of children and adolescents. Fitting of a logistic regression using level of other diseases as the dependent variable showed that residence, parents’ education, family income, prevalence of diabetes, food habit of the children, blood sugar level of children, and utilization of time by the children were some of the variables responsible for prevalence of other diseases among the children.
Keywords: Level of obesity; Socioeconomic variables; Significant association between diabetes and level of obesity; Logistic regression

Introduction

Child overweight and obesity are among the most serious public health challenges of the 21st century worldwide, especially in low- and middle-income countries. They mostly affect urban people [1,2]. Obesity for children and adolescents is measured by the body mass index [BMI = weight in kg / (height in m)²], where children having BMI above the 85th percentile are considered overweight and those who have BMI above the 95th percentile are considered obese [3]. The level of obesity is increasing at an alarming rate. In a global study in 2016 it was estimated that over 41 million children under the age of 5 years were overweight [1]. The prevalence of overweight in adolescents is defined by a BMI more than one standard deviation above the reference median, and obesity by more than two standard deviations above it [1]. Overweight and obesity are defined as abnormal or excessive fat accumulation that presents a risk to health. Child obesity can lead to life-threatening conditions including diabetes, heart disease, sleep problems, cancer, liver disease, early puberty, eating disorders, skin infection, asthma and other respiratory problems [4]. Adolescent obesity leads to problems of hepatitis, sleep apnoea, and increased intracranial pressure [5].
The other effects of overweight and obesity are psychological [6], including depression [7], and physical [8,9]. The early physical effect of obesity in adolescence is noted when it affects almost all the organs, which leads to an increased rate of mortality in adulthood [4,10]. Some causes of childhood obesity are genetic: over 200 genes affect weight by determining activity level, food preferences, body type, and metabolism [4]. Contributing family practices include a decreasing rate of breastfeeding by mothers, children staying home and using electronic devices, less physical activity, and food habits, especially taking more calorie-dense, lower-fiber food from restaurants [4,9,11], as well as low socioeconomic status [12] and the habit of consuming calorie-rich drinks [13]. However, the problem can be obviated by avoiding the causes of obesity and encouraging children and adolescents to be involved in physical activities. From the above discussion, it can be concluded that the growing level of obesity among children and youth and the increasing prevalence of diabetes are of great concern throughout the world. Many of the complications are silent and often go undiagnosed. Obese children are at high risk for the development of early morbidity. Considering all these aspects, the objective of the study was to observe the joint relationship of level of obesity, prevalence of diabetes and prevalence of other diseases with the socioeconomic factors most responsible for the variation in the level of obesity among children and adolescents under 18 years of age from affluent families. The specific objective was to investigate the association of the level of obesity of children and adolescents with some social factors, and to identify the variables responsible for other diseases.

Methodology

For the analysis it was decided to collect information from children of affluent families. In separate studies [14,15], families of students of American International University - Bangladesh were identified as affluent families. Hence, it was planned to collect data from some randomly selected families of students of the abovementioned university. In a previous study [16] it was reported that 7% of children and youth in Bangladesh were overweight or obese. Accordingly, we decided to assume a proportion of at least 7% overweight and obese children and youth, with a margin of error of 2% at 95% confidence. For a simple random sample, the calculated sample size was n = 625. This sample size covered 6.6% of the students of the university. The sample students were selected by the simple random sampling method, and responses were expected from at least 5% of the students' families. However, information was received from 560 families, covering the data of 662 children.
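A minimal sketch of that prevalence-based sample-size calculation (7% expected prevalence, 2% margin of error, 95% confidence, simple random sampling); the rounding convention is an assumption:

```python
# Sample size for estimating a proportion with a given margin of error.
from scipy.stats import norm

p, margin, conf = 0.07, 0.02, 0.95
z = norm.ppf(1 - (1 - conf) / 2)          # ~1.96
n = z**2 * p * (1 - p) / margin**2
print(round(n))                            # 625, the study's reported sample size
```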
The data were collected through a pre-designed and pre-tested printed questionnaire covering questions related to the demographic characteristics of the children and adolescents of age below 18 years and questions related to the socioeconomic variables of the parents. The randomly selected students were given written instructions on how to collect information and were requested to help in collecting information from their parents, who were very much concerned about the health hazards of their offspring. The parents of the children filled in the questionnaires, as some of the children were under 18 years of age and some were even below 10 years. The important information collected was the age, height, weight, sex, food habit, time use and, where feasible, involvement in co-curricular activities of the children, and information regarding the prevalence of any other diseases. To study the socioeconomic background of the children, information regarding the parents' level of education, occupation and income was also collected. For youth having diabetes, the latest blood sugar level measured by a registered practitioner or in a registered clinic was also recorded. The association of the level of obesity of offspring with the families' socioeconomic background was examined using the chi-square test, where a significant association was concluded when the p-value was ≤0.05. A logistic regression model using levels of prevalence of other diseases as the dependent variable was fitted.

Result and Discussion

The present analysis was done using the data on the social, medical and economic aspects of 662 children of age less than or equal to 18 years investigated from 560 randomly selected families of the students of American International University - Bangladesh. From the analysis it was noted that 22.8 percent of children were affected by diabetes. This percentage among the obese and severe obese group of children was 32.2, indicating that level of obesity and prevalence of diabetes were significantly associated
[χ2 = 8.741, p-value = 0.033, Table 1]
Considering the prevalence of diabetes among the obese and severe obese group compared to the non-obese group, the former group was 69 percent more exposed to the problem of diabetes [O.R. = 1.69]. Their risk ratio was 1.47 compared to the non-obese group. Amongst the investigated children, 70.2 percent were in the underweight group and 9.1 percent were in the obese and severe obese group. The level of obesity was measured by BMI (weight in kg / height in m²). The mean value of BMI was 17.67 with a standard deviation of 10.58. The underweight group of children and adolescents had BMI <23; the BMIs of the other three groups were 23 - <30, 30 - <45 and 45+. The levels of BMI were decided according to the percentile values. This finding was similar to that observed in another study [15]. Amongst the observed children and adolescents, 78.1 percent were in the age group 10 years and above, 70.2 percent of the investigated children and youth were in the underweight group and 9.1 percent were obese and severely obese. The majority (Table 2, 78.2%) of them were in the age group 10 years and above, and among them 72.6 percent were in the underweight group; in this group obese and severe obese children were 6.9 percent. Most obese and severe obese children (19.4%) were among the children of age 5 to less than 10 years. The differences in the proportions of levels of obesity according to different age groups were significant [χ2 = 39.043, p-value = 0.000]. The prevalence of obesity and severe obesity among the children of age group 5 to less than 10 years compared to children of other age groups was much higher [O.R. = 13.06]. Let us investigate the prevalence of diabetes and prevalence of other diseases among the children and adolescents. It was observed that 86.6 percent of investigated children had no other health hazard (Table 2) except diabetes. However, 21.2 percent of them were diabetic patients. Among the diabetic patients, 11.3 percent had an eye problem. The major problem among the respondents was eye problems; the percentage of this group of children was 7.6. There were significant differences in the percentages of respondents facing a health hazard according to prevalence of diabetes [χ2 = 10.957, p-value = 0.027].
Now, let us investigate the reasons for obesity and severe obesity among the children and youth. Some of the social factors might have enhanced the level of obesity; this was noted from the study of the association of different factors with level of obesity. The investigated children and adolescents were classified into three classes by their age levels, and these three groups were again classified by their level of obesity. The classified results are shown in Table 3. It was seen that 72.5% of children and youth of the age group 10 years and above were underweight. The proportions of underweight children in the other two age groups were lower than the percentage in the overall underweight group of children. The children less than 5 years of age had the highest percentage in the overweight group, and this group of children had a 58 percent [O.R. = 1.58] greater chance of overweight compared to other groups of children. This differential in proportions of level of obesity according to age groups was highly significant [χ2 = 38.94, p-value = 0.000]. Amongst the studied children, 58.2 percent were males (Table 4) and 77.4 percent of them were underweight; the corresponding figure among females is 60.3 percent. The differential in obesity by sex is significant [χ2 = 44.03, p-value = 0.00]. The numbers of children of different levels of obesity belonging to different residential areas are presented in Table 5. It was seen that the highest proportion of underweight children (76.5%) was among village children compared to urban and semi-urban children. Again, among the village children, the numbers in the obese and severe obese groups were lower compared to other groups of children. The information for 72.5% of children was reported from urban areas; the corresponding percentages of rural and semi-urban children were 18 and 9.5. The classified information on level of obesity and residence of children was significantly different [χ2 = 12.45, p-value = 0.04]. Similar findings were observed in other studies [14,15]. It was already mentioned that the study group of children were mostly living in the city center (72.5%), and though they had enough scope to be involved in physical activities like games and sports, still the majority of the children (39.9%) passed their time watching television and 16.8% slept after or before their academic activities. One-fourth (26.4%) of the investigated children mentioned that they were involved in some other activities including games and sports (Table 6). Around 72% of the severe obese group passed their time watching television; the corresponding percentage among the obese group is 45.2. The differentials in proportions of utilization of time by the children of different obese groups were significantly different [χ2 = 54.12, p-value = 0.00]. Let us now observe the food habits of the investigated children and adolescents. As the investigating units were mostly from affluent city residences, they had the scope to get sufficient food, with proper hygienic measures. Among the investigating units, 47.9 percent were habituated to taking food from restaurants. Among the obese children, 54.7 percent were habituated to taking restaurant food (Table 7).
In a separate study [15] it was reported that the increasing trend of obesity was associated with fast food from restaurants. Of course, higher proportions of the underweight (46.9%) and overweight (53.3%) groups of children were also habituated to taking restaurant food. However, the differentials in proportions of children taking restaurant food according to different levels of obesity were significant [χ2 = 94.63, p-value = 0.00]. Usually the children of affluent families were more likely to stay back in the house and pass time watching television. These children also had more chances to frequently visit fast food shops; their parents could afford the cost of fast foods and also fulfilled the demands of their children if they had sufficient family income. It was observed that the monthly family income of 38.2% of families was 70 thousand taka [Bangladesh currency] and above, but 79.1% of children of these families were in the underweight group (Table 8). It was seen that the prevalence of obesity was higher among the children of the low-income group of families. This differential in obesity among the low-income group of families was significant [χ2 = 53.06, p-value = 0.00]. Family environment was one of the correlates of obesity among children [16], and it seemed that family environment was influenced by parents' education and occupation. Let us investigate how fathers' and mothers' education were associated with children's and adolescents' obesity. It was seen (Table 9) that the fathers of 77.9% of children were higher educated and 75% of their children were underweight. The percentage of illiterate fathers was 3.5, and 91% of the children of these fathers were underweight. But obesity and severe obesity among children of illiterate and primary educated fathers were higher (8.7 and 17.4% respectively) compared to the children of secondary educated (2.1%) fathers. The differential in proportions of level of obesity by fathers' educational level was highly significant [χ2 = 111.70, p-value = 0.00]. Similar significant differentials in proportions of obesity of children according to differences in mothers' education were also observed [Table 10, χ2 = 39.23, p-value = 0.00].
There were 5.1 percent agriculturists, and 79.4 percent of their children were underweight. The lowest proportion of underweight children was observed in those families where the father was engaged in a profession other than business and service. The highest proportion of underweight children was observed in the families where the father was a serviceman. The differential in proportions in different levels of obesity by father's occupation was significant [χ2 = 67.281, p-value = 0.000, Table 11]. However, mother's occupation had no impact on the level of obesity of children and adolescents [χ2 = 6.279, p-value = 0.393, Table 12].
The association between level of obesity and some social characteristics was studied by chi-square test. The impact of social variables on obesity and prevalence of diabetes was not studied here; it was done in another study [19]. However, the impacts of social factors on the prevalence of other diseases were studied by fitting the logistic regression model with levels of other diseases as the dependent variable. The explanatory variables used were residence, religion, age, parents' education, parents' occupation, family income, food habit of children, utilization of time by the children, their blood sugar level and body mass index. However, not all the variables were used in fitting the final model, because during examination of the model fitting criteria some variables were found insignificant; these variables were age of children, gender of children and parents' occupation. The analytical results are shown below. From the fitted model it was noticed that prevalence of diabetes, level of obesity, and residence were the factors responsible for the prevalence of other diseases. The analysis was done considering no disease as the reference category (Table 13). Thus, model fitted results were available for the remaining 4 types of diseases. However, due to insignificant results in fitting the model for diseases like kidney problems, hypertension and some other diseases, those results are not presented. Results are presented only for the disease eye problem. This problem prevailed among 7.6 percent of respondents, and this group was the biggest (56.2%) among the children and adolescents who experienced different diseases. The fitted model was significant, as observed by the statistic -2 log likelihood = 364.489 and the corresponding χ2 = 1696.824 with p-value = 0.000. The value of Cox and Snell R2 = 0.923 and Nagelkerke R2 = 0.961.
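The chi-square, odds-ratio and risk-ratio figures quoted throughout this section come from 2x2 tables; the sketch below shows the standard computations on an illustrative table roughly reconstructed from the quoted percentages, not the study's actual counts.

```python
# Chi-square test, odds ratio and risk ratio for a 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency

#                 diabetic  non-diabetic
table = np.array([[19,       41],          # obese + severe obese (illustrative)
                  [132,      470]])        # non-obese (illustrative)

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
risk_ratio = (table[0, 0] / table[0].sum()) / (table[1, 0] / table[1].sum())
print(round(chi2, 3), round(p, 3), round(odds_ratio, 2), round(risk_ratio, 2))
```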

Conclusion

The present study was conducted to observe the level of obesity, the prevalence of diabetes and the prevalence of other diseases among children and adolescents of some randomly selected families of students of American International University-Bangladesh. Most of the families were city dwellers and were socially and economically better off [17-19] than the general population of Bangladesh. However, obesity and severe obesity among their children were similar to those of the general population of the country. Obesity and severe obesity were associated with the parents' social and economic status; occupation, family income and the age of children and youth were the most important factors influencing the level of obesity. The study indicated that the prevalence of diabetes depended on the level of obesity, and both of these characteristics are a problem for parents as well as health planners. Parents can take care of their offspring's food and motivate them to eat home food as far as possible, avoiding restaurant food (Table 14). They can also motivate their kids to spend time on activities involving physical work in addition to their academic work. A logistic regression model was fitted to identify the variables responsible for the prevalence of other diseases; it was observed that family income, prevalence of diabetes and level of obesity were among the responsible variables.
The problems of obesity are manifold. It is a life-threatening condition that can promote diabetes, heart disease, cancer, liver disease, skin infections, asthma and other respiratory problems [20,21]. Obese adolescents have an increased chance of mortality during adulthood [22]. The problem also arises from the social environment, so some measures need to be taken to control it. Government and school authorities should introduce regulations so that physical education is a compulsory co-curricular activity in schools. Parents can encourage their kids to avoid excessive television watching and untimely sleeping, and they can provide quality school lunches. Kids should be given fresh and healthy food and should be accompanied to parks, playing fields or walkways. They can be advised to avoid sedentary activities such as use of mobile phones, computers and video games. Such steps by parents can help prevent the alarming increase in the rates of obesity and severe obesity and in the prevalence of diabetes and other diseases.
For more Lupine Publishers Open access journals please visit our website
https://lupinepublishers.us/
For more Diabetes open access journal please click here
https://lupinepublishers.com/diabetes-obesity-journal/
Follow on Twitter : https://twitter.com/lupine_online
Follow on Blogger : https://lupinepublishers.blogspot.com/
submitted by Lupinepublishers-ADO to u/Lupinepublishers-ADO [link] [comments]

Question regarding statistical power and sample size?

Hey guys, I was wondering if anyone could shed some light on this statistical analysis part for a clinical trial we are currently studying. It states:
We determined that a sample of 4150 patients would provide the trial with a power of 90% to detect a hazard ratio of 0.75 with a two-sided alpha level of 0.05 on the basis of an event rate of 15% in the aspirin-only group. The sample was inflated to account for two interim analyses of the primary efficacy outcome with the use of an O’Brien–Fleming spending function. The spending-function approach allowed for additional efficacy interim analyses to be conducted at the request of the data and safety monitoring board while maintaining the type I error rate. On the basis of the observed event rate in the aspirin-only group at the first interim analysis, the sample was increased to 5840 patients to provide the trial with a power of 80% with other variables remaining unchanged in the calculation.
On the basis of what we were taught, we were a bit confused as to why the power decreased from 90% to 80% after increasing the sample size. This is probably a basic concept and I'm probably missing something obvious, but any help would be greatly appreciated!
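For context on the quoted calculation: planned sample sizes like these are commonly derived from the Schoenfeld event-count formula for the log-rank test. The sketch below reproduces that arithmetic under the stated inputs (HR = 0.75, two-sided alpha = 0.05, 15% event rate in the aspirin-only group) plus an assumed 1:1 allocation; the helper functions are made up for illustration and are not the trial's actual code.

```python
# Sketch of the Schoenfeld approximation; assumes 1:1 allocation, illustrative only.
from math import log, ceil
from scipy.stats import norm

def events_needed(hr, alpha=0.05, power=0.90, alloc=0.5):
    """Total events required to detect hazard ratio `hr` with a two-sided log-rank test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2)

def patients_needed(hr, control_event_rate, **kw):
    """Rough patient count: required events divided by the average event probability
    across the two arms (treated-arm rate approximated as hr * control rate)."""
    avg_event_prob = 0.5 * (control_event_rate + hr * control_event_rate)
    return ceil(events_needed(hr, **kw) / avg_event_prob)

print(round(events_needed(0.75)))               # ~508 events at 90% power
print(patients_needed(0.75, 0.15))              # ~3900 patients before inflation
print(patients_needed(0.75, 0.15, power=0.80))  # fewer patients at 80% power, same event rate
```

Under these assumptions the formula gives roughly 508 events, or about 3,900 patients before inflation for interim analyses, which is in the ballpark of the quoted 4,150. Power is driven by the number of events rather than the number of patients, so if the observed event rate is lower than assumed, the required sample can grow even when the power target is relaxed to 80%.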
submitted by NiF1997 to statistics [link] [comments]

Lupine Publishers| Selective Androgen Receptor Modulators (SARMs): A Mini-Review

Lupine Publishers| Selective Androgen Receptor Modulators (SARMs): A Mini-Review

https://preview.redd.it/8i7nt3uwe2s31.jpg?width=744&format=pjpg&auto=webp&s=d93d41821a9baa4052bee2ff9b01ef8df15350fe
Lupine Publishers| Journal of reproductive

Abstract

Selective Androgen Receptor Modulators (SARMs) were discovered in the late 1990s. They may have applications in the treatment of various diseases, including muscle wasting, cancer cachexia, breast cancer, osteoporosis, andropause and sarcopenia. In this mini-review the development, pharmacodynamics, and the phase 1 and 2 trial results of the SARMs are discussed, with a special emphasis on the illicit use of SARMs.

Introduction

The androgen receptor (AR) is a member of the steroid hormone receptor family that plays important roles in the physiology and pathology of diverse tissues. AR ligands, which include circulating testosterone and locally synthesized dihydrotestosterone, bind to and activate the AR to elicit their effects. Ubiquitous expression of the AR, its metabolism and cross-reactivity with other receptors limit broad therapeutic utilisation of steroidal androgens. However, the discovery of selective androgen receptor modulators (SARMs) provides an opportunity to promote the beneficial effects of androgens with greatly reduced unwanted side effects. In the last two decades SARMs have been proposed as treatments of choice for various diseases, including muscle wasting, breast cancer and osteoporosis. In addition, they may have applications in andropause, sarcopenia and cancer cachexia, and as selective anabolic agents in bodybuilding sports [1-6]. In this mini-review the development, pharmacodynamics and the phase 1 and 2 trial results of the SARMs are discussed, with a special emphasis on the illicit use of SARMs.

Development of SARMs

Synthetic steroidal androgens, owing to their ability to mimic the actions of their endogenous counterparts, have been used clinically as valuable therapeutic agents to target a variety of male and female disorders resulting from androgen deficiency. The principal clinical indication of androgens is as replacement therapy for hypogonadal men [1,2]. Other documented clinical uses of androgens include delayed puberty in boys, anemias, primary osteoporosis, hereditary angioneurotic edema, endometriosis, estrogen receptor-positive breast cancer and muscular diseases such as Duchenne's muscular dystrophy [3-6].
Since the discovery of the therapeutic benefits of testosterone in the 1930s, a variety of androgen preparations have been introduced and tested clinically.
Unfortunately, all currently available androgen preparations have severe limitations [2,6]. Unmodified testosterone is impractical for oral administration due to its low systemic bioavailability [7]. Testosterone esters (e.g., testosterone propionate and testosterone enanthate) are presently the most widely used testosterone preparations, usually administered by intramuscular injection in oil vehicles [8,9]. A prolonged duration of action is achieved with these esters; however, they produce highly variable testosterone levels. 17-alpha-alkylated testosterones (e.g., methyltestosterone and oxandrolone) can be given orally. Nevertheless, they often cause unacceptable hepatotoxicity and are less efficacious; hence they are not recommended for long-term androgen therapy [9-11].
At the end of the 1990s, studies with affinity ligands for the androgen receptor started. The discovery of these nonsteroidal androgens offered an opportunity for the development of a new generation of selective androgen receptor modulators (SARMs) superior to current androgens. Theoretically, SARMs are advantageous over their steroidal counterparts in that they can achieve better receptor selectivity and allow greater flexibility in structural modification. Thus SARMs can potentially avoid the undesirable side effects caused by cross-reactivity and achieve superior pharmacokinetic properties [12].

Pharmacodynamics of SARMs

Structural modifications of the aryl propionamide analogues bicalutamide and hydroxyflutamide led to the discovery of the first-generation SARMs. The compounds S1 and S4 in this series bind the AR with high affinity and demonstrate tissue selectivity in the Hershberger assay, which utilizes a castrated rat model [13-20]. Both S1 and S4 prevented castration-induced atrophy of the levator ani muscle and acted as weak agonists in the prostate. At a dose of 3 mg/kg/day, S4 only partially restored prostate weight (to <20% of intact weight), but fully restored the levator ani weight, skeletal muscle strength, bone mineral density, bone strength and lean body mass, and suppressed LH (luteinizing hormone) and FSH (follicle stimulating hormone) [20,21].
S4 also prevented ovariectomy-induced bone loss in a female rat model of osteoporosis [22]. The ability of SARMs to promote both muscle strength and bone mechanical strength constitutes a unique advantage over other therapies for osteoporosis, which only increase bone density. S1 and S4 are thus partial agonists in intact male rats [20,21]: they compete with endogenous androgens and act as antagonists in the prostate, so SARMs with antagonistic or low intrinsic activity in the prostate might be useful in the treatment of benign prostatic hyperplasia (BPH) or prostate cancer. The suppressive effects of this class of SARMs on gonadotropin secretion in rats suggest a potential application in male contraception [21]. The ether linkage and the B-ring para-position substitution are critical for agonist activity of the aryl propionamide SARMs [19]. Based on crystal structures, compounds with an ether linkage appear to adopt a more compact conformation than bicalutamide due to formation of an intramolecular H-bond, allowing the B-ring to avoid steric conflict with the side chain of W741 in the AR and potentially explaining the agonist activity [23].
The hydantoin derivatives developed by the BMS group have an A-ring structure similar to that of bicalutamide. The cyanonitro group of these molecules interacts with Q711 and R752 [24-26]. The benzene ring or the naphthyl group, together with the hydantoin ring, overlaps the steroid plane, while the hydantoin ring forms an H-bond with N705. BMS-564929 binds the AR with high affinity and high specificity. It demonstrated anabolic activity in the levator ani muscle and a high degree of tissue selectivity, as indicated by a substantially higher ED50 (the dose effective in 50% of recipients) for the prostate. Hydantoin derivatives are potent suppressors of LH. BMS-564929 is orally available in humans with a half-life of 8-14 hours. The prolonged half-life of these ligands in rats may explain the lower dose needed to achieve pharmacological effects. Differences in the in vivo activities of SARMs that share similar binding affinity and in vitro activity may be related to differences in pharmacokinetics and drug exposure [27].
Hanada et al. [28], working at a pharmaceutical company, reported a series of tetrahydroquinolinone derivatives as AR agonists for bone. Although these compounds displayed high AR affinity and strong agonist activity in the prostate and levator ani, they demonstrated little selectivity between androgenic and anabolic tissues [27]. Significant in vivo pharmacological activity was only observed at high subcutaneous doses [27,28]. Ligand Pharmaceuticals developed LGD 2226 and LGD 2941, bicyclic 6-anilino quinolinone derivatives showing anabolic activity on the levator ani muscle as well as on bone mass and strength, while having little effect on prostate size in a preclinical rodent model [29-31]. LGD 2226 was also shown to maintain male reproductive behavior in the castrated rodent model [30].
Scientists at Johnson and Johnson replaced the propionamide linker with cyclic elements such as pyrazoles, benzimidazoles, indoles and cyclic propionanilide mimetics [31]. Merck scientists have developed a number of 4-azasteroidal derivatives and butanamides [32]. All the above-mentioned SARMs belong to the so-called "first generation SARMs". The mechanisms that contribute to the tissue-specific transcriptional activation and selectivity of the biologic effects of SARMs remain poorly understood. Three general hypotheses have been proposed, although they are not mutually exclusive.
a) The coactivator hypothesis assumes that the repertoire of coregulator proteins that associate with the SARM-bound AR differs from that associated with the testosterone-bound AR, leading to transcriptional activation of a differentially regulated set of genes.
b) The conformational hypothesis states that functional differences between ligand classes (agonists, antagonists and SARMs) are reflected in conformationally distinct states with distinct thermodynamic partitioning. Ligand binding induces specific conformational changes in the ligand-binding domain, which could modulate surface topology and subsequent protein-protein interactions between the AR and other coregulators involved in genomic transcriptional activation, or cytosolic proteins involved in non-genomic signalling. Differences in ligand-specific receptor conformation and protein-protein interactions could result in tissue-specific gene regulation, due to potential changes in interactions with AR effectors, coregulators or transcription factors.
c) The third hypothesis states that the tissue selectivity of SARMs could also be related to differences in their tissue distribution, potential interactions with 5-alpha reductase or CYP19 aromatase, or tissue-specific expression of coregulators [33]. Testosterone actions in some androgenic tissues are amplified by its conversion to 5-alpha-dihydrotestosterone [34]. Nonsteroidal SARMs do not serve as substrates for 5-alpha reductase. Tissue selectivity of SARMs might be related to tissue-specific expression of coregulatory proteins. Similarly, some differences between the actions of SARMs and testosterone could be related to the inability of nonsteroidal SARMs to undergo aromatization.

Preclinical and early clinical trials with SARMs

A large number of candidate SARMs have undergone preclinical proof-of-concept and toxicology studies and have made it into phase 1 and phase 2 clinical trials [29,35]. These compounds are being positioned for early efficacy trials in osteoporosis, frailty, cancer cachexia and aging-associated functional limitations. The use of SARMs for the treatment of androgen deficiency in men has been proposed; however, the relative advantages of SARMs over testosterone for this indication are not readily apparent. Many biological effects of testosterone, especially its effects on libido and behavior, bone and plasma lipids, require its aromatization to estrogen. Because the current SARMs are neither aromatized nor 5-alpha reduced, these compounds would face a higher regulatory bar for FDA approval, as they would be required to show efficacy and safety in many more domains of androgen action than has been required of testosterone formulations.
While the FDA regulatory pathway for the approval of drugs for osteoporosis has been well delineated because of precedents set by previously approved drugs, the pathway for approval of function-promoting anabolic therapies has not been clearly established. Efforts are underway to generate a consensus around indications, efficacy outcomes in pivotal trials, and minimal clinically important differences in key efficacy outcomes. These efforts should facilitate efficacy trials of candidate molecules. SARMs are administered in two ways: orally or in injectable dosages. Well-known SARMs are LGD-4033, Ostarine (MK-2866), S4 (Andarine), RAD 140, Cardarine (GW 501516) and SR9009; the last two preparations are usually grouped with SARMs but are not SARMs and are used as endurance supplements. SARMs have been prohibited by the World Anti-Doping Agency (WADA) since 2008. They have the potential to be misused for performance enhancement in sport due to their anabolic properties, as well as their ability to stimulate androgen receptors in muscle and bone, and they are currently prohibited at all times in the category of "other anabolic agents" under section S1.2 of the WADA Prohibited List [36]. To date, full clinical FDA approval for human use as prescription drugs has not been obtained for any SARM.

Ligandrol (LGD-4033)

Ligandrol is a SARM discovered by Ligand Pharmaceuticals and currently under licensed development by Viking Therapeutics [37]. There has been a lot of research into the efficacy of SARMs in general, but very little published research to date on LGD-4033. Ligandrol has exhibited desirable in vivo efficacy on skeletal muscle and bone measurements in animal models of disease. There is only one published study on the effects of LGD-4033 in humans, together with phase 1 clinical trial results. A 2010 phase 1 clinical trial was the first study of LGD-4033 in humans; it evaluated the safety, tolerability and pharmacokinetic profiles of the molecule in a single escalating-dose, double-blind, placebo-controlled study in 48 healthy volunteers [38].
In 2013, Bhasia et al. [36] conducted a rigorous 3-week placebo-controlled study of 76 healthy men (21-50 years) that looked at the safety and tolerability of LGD-4033. Participants were randomized to placebo or 0.1, 0.3 or 1.0 mg LGD-4033 daily for 21 days. The study evaluated the safety, tolerability and pharmacokinetics of LGD-4033 and the effects of ascending doses on lean body mass, muscle strength, stair-climbing power and sex hormones [39]. The sample size was small and was not based on considerations of effect size, as the study's primary aim was to establish safety and tolerability rather than efficacy. Similarly, the 3-week study duration was not designed to demonstrate maximal effects on muscle mass and strength. Larger and longer studies are therefore needed to assess the efficacy of LGD-4033. Furthermore, the study was supported by Ligand Pharmaceuticals, which developed LGD-4033.
Ligandrol showed a dose-dependent suppression of total testosterone from baseline to day 21, rather than an increase. Ligandrol did not result in fat loss in this study. It promoted muscle growth, but the evidence is still very early and weak at this stage. There was an increase in lean body mass that was dose-related. The mechanisms by which androgens increase muscle mass remain incompletely understood. The increase in strength measured by stair-climbing speed and power also showed improvement, but not enough to be statistically significant; with a larger sample size and/or a longer study, it is possible that this effect may be demonstrated. LGD-4033 displayed an immediate effect on hormones in the body from the time it was taken, and the research showed gains in lean muscle mass within the 21 days of the study. Adverse effects were not noted. LGD-4033 displayed a prolonged elimination half-life of 24-36 hours. Upon discontinuation of LGD-4033 the hormone levels returned to baseline by day 56 [39]. There is simply not enough research to show the efficacy of Ligandrol at this stage, although it was safe and well tolerated at all doses administered.

Ostarine (MK-2866, Enobosarm)

Merck presented the results of a phase 2 clinical trial evaluating Ostarine (MK-2866), an investigational SARM, in patients with cancer-induced muscle loss (cancer cachexia) at the Endocrine Society Annual Meeting in Washington in 2009 [40]. In this study 159 cancer patients with non-small cell lung cancer, colorectal cancer, non-Hodgkin's lymphoma, chronic lymphocytic leukemia or breast cancer were randomized. Participants received placebo, 1 mg or 3 mg Ostarine daily for 16 weeks. Average weight loss prior to entry was 8.8 percent, and patients were allowed to receive standard chemotherapy during the trial. The drop-out rate during the trial was 33%.
Ostarine treatment led to statistically significant increases in lean body mass (LBM) and improvement in muscle performance, measured by stair climbing, in patients with cancer cachexia compared to baseline in both the Ostarine 1 mg and 3 mg cohorts. The study met the primary endpoint of LBM measured by DEXA (dual-energy x-ray absorptiometry) scan, demonstrating significant increases in LBM compared to baseline in both Ostarine arms: the change from baseline in LBM in the placebo, 1 mg and 3 mg groups was 0.1 kg (p=0.874 compared to baseline), 1.5 kg (p=0.001) and 1.3 kg (p=0.0045), respectively, at the end of the 16-week trial.
The study also met the secondary endpoint of muscle function, measured by a 12-step stair-climbing test of speed and calculated power, with each Ostarine treatment arm demonstrating a statistically significant average decrease in time to completion and average percentage increase in power exerted. The change from baseline in stair-climb power in the placebo, 1 mg and 3 mg treatment groups was 0.23 Watts (p=0.66 compared to baseline), 8.4 Watts (p=0.002) and 10.1 Watts (p=0.001), respectively. A critical appraisal raises the same criticisms as outlined for the Ligandrol results. Ostarine is also known as Enobosarm and as the S-22 SARM under various licensing contracts in the bodybuilding world.
Ostarine had already shown significant improvement in the ability of healthy elderly men and women to climb stairs in a phase 2A study in 2007: elderly men and women improved in stair-climbing speed and power, accompanied by significant increases in LBM and decreases in fat mass, after only 86 days [41]. Enobosarm (GTx-024, Ostarine, S-22) is the most thoroughly characterized SARM clinically and has consistently demonstrated increases in LBM and better physical function across several populations, along with a lower hazard ratio for survival in cancer patients. Enobosarm was evaluated in the POWER 1 (Prevention and Treatment of Muscle Wasting in Patients with Cancer) and POWER 2 trials, the first phase 3 trials for a SARM. Full results from these studies will soon be published and will guide the development of future anabolic trials [42].

Andarine (S4)

Andarine (S4) was studied in 120 ovariectomized rats for 120 days. The study found that treatment with S4 was beneficial in maintaining cortical bone content and whole-body and trabecular bone mineral density (BMD) measured by DEXA scan. S4 treatment also decreased body fat and increased body strength in these animals. The study further showed that S4 had the ability to reduce the incidence of fractures, both by minimizing the incidence of falls through increased muscle strength and through direct effects on bone, as compared to current therapies that are primarily antiresorptive in nature. The study also found that dosages of S4 were effective in increasing LBM and reducing body fat in intact and ovariectomized rats. Andarine thus appears to offer the potential to prevent bone resorption, increase skeletal muscle mass and strength, and promote bone anabolism, which makes it a possible new alternative for the treatment of osteoporosis [43]. To date there are no clinical human studies of Andarine in osteoporosis. Andarine has a half-life of 4-6 hours and is prized in the fitness community as a muscle-boosting supplement for weight loss and for the building and repair of muscle.

RAD 140 (Testolone)

RAD 140 is a SARM that stimulates increases in muscle weight at a lower dose than that required to stimulate prostate weight. It produces the expected lowering of lipids (LDL, HDL, triglycerides) without elevation of liver transaminase levels. RAD 140 has excellent pharmacokinetic properties and is a potent anabolic agent [44]. RAD 140 is also a potent AR agonist in breast cancer cells, with a distinct mechanism of action that includes AR-mediated repression of estrogen receptor 1 (ESR1). It inhibits the growth of multiple AR/ER+ breast cancer PDX (patient-derived xenograft) models as a single agent and in combination with palbociclib. These preclinical data support further investigation of RAD 140 in AR/ER+ breast cancer patients [45].
In the fitness community Testolone is seen as one of the latest additions to the line of SARMs. Testolone was developed by Radius Health. The increase in LBM and the fat loss are highly appreciated, as is its claimed anabolic-to-androgenic ratio of 90:1 compared to testosterone. Recommended dosages of Testolone vary from 20-30 mg once daily, and it is used in cycles of 12-14 weeks. Because Testolone does not interact with the aromatase enzyme and is not liver toxic, no adverse effects are claimed. The half-life of Testolone is estimated at 12-18 hours.

Cardarine (GW 501516) and SR 9009 (Stenabolic)

These two preparations are usually grouped with the SARMs in the fitness community, but they are not SARMs. Cardarine is used as a supplement to enhance running endurance. Cardarine is not a SARM but a peroxisome proliferator-activated receptor delta (PPAR-δ) agonist that increases PPAR-δ activity and thereby regulates muscle metabolism and reprograms muscle fibre types to enhance running endurance. While training alone increases exhaustive running performance, Cardarine treatment enhances running endurance and the proportion of succinate dehydrogenase (SDH)-positive muscle fibres in both trained and untrained mice. It appears that while training increases energy availability by promoting protein catabolism and gluconeogenesis, Cardarine enhances the specific consumption of fatty acids and reduces glucose utilisation [47]. In the fitness community Cardarine is regarded as the "king of the gym". Its half-life is between 16-24 hours and it is typically taken at 10 mg once or twice daily. It is claimed to be useful in conjunction with anabolics and stimulants of any kind, without adverse reactions, in 12-14 week cycles.
SR 9009 (Stenabolic) is an agonist of the REV-ERB nuclear receptors (so named because they are encoded on the reverse strand of the ERBA gene), which can modulate the expression of core circadian clock proteins and therefore help to modulate the circadian rhythm. Modulation of REV-ERB activity by synthetic agonists such as SR 9009 and SR 9011 alters the expression of genes involved in lipid and glucose metabolism and therefore plays an important role in maintaining energy homeostasis. The effects of SR 9009 and SR 9011 observed in animal studies are increased basal oxygen consumption; decreased lipogenesis, cholesterol and bile acid synthesis in the liver; increased mitochondrial content and glucose and fatty acid oxidation in skeletal muscle; and decreased lipid storage in white adipose tissue. The observed increase in energy expenditure and decrease in fat mass make the REV-ERB agonists promising drug candidates for the treatment of several metabolic disorders. They are also attractive for performance enhancement by athletes; such use can be classified as doping [48].
SR 9009 (Stenabolic) was developed at Scripps Research by the team of Prof. Thomas Burris. Stenabolic is taken orally as a metabolism enhancer in the fitness community. It is believed to produce results similar to Cardarine, but with considerably more additional benefits. It is recommended as a very good addition to any steroid (e.g., Anavar or Trenbolone) or SARMs cycle, especially when used together with Cardarine. The half-life is short, 30-60 minutes, so the dose should be spaced throughout the day, e.g., 10 mg 4-6 times daily. Again, no adverse effects are reported.

Illicit use of SARMs

Recently, the FDA issued a consumer warning against supplement-like bodybuilding products that contain SARMs. The FDA warning came on the heels of warning letters sent to three companies that market products containing these ingredients. The FDA had this to say about the offending products distributed by Infantry Labs LLC, Iron Mag Labs and Panther Sports Nutrition: "Although the products identified in the warning letters are marketed and labeled as dietary supplements, they are not dietary supplements. The products are unapproved drugs that have not been reviewed by the FDA for safety and effectiveness" [49]. The FDA told consumers that among the dangers associated with SARMs are liver toxicity and the potential to increase the risk of heart attack and stroke, but the agency said the long-term effects of these substances are unknown. However, these FDA health risk statements cannot be supported by the few small clinical human phase 1 and 2 SARMs studies performed and the ongoing POWER trials. Furthermore, the FDA did not mention that Ostarine and Ligandrol have previously been investigated as new drugs, which makes them ineligible for use as dietary supplements.
Nevertheless, as clinical research on SARMs is slow, we are now in the wonderful situation that real-world clinical experience with SARMs is represented by the fitness and bodybuilding world. It is estimated that there are between 2 and 4 million young people in the U.S. alone who have used performance-enhancing drugs at some time in their life, and there are thousands of internet sites offering SARMs inside and outside the U.S. [50]. So the magnitude of the problem is completely unknown, if there is any problem at all. In general, these young people are very concerned about their health and their "looks", and they have every right to take responsibility for their own choices.
A recent JAMA publication found, based on chemical analysis of 44 products sold via the internet as SARMs, that only 52% actually contained SARMs and another 39% contained a different unapproved drug. In addition, 25% of products contained substances not listed on the label, 9% contained no active substance, and 59% contained amounts of substances that differed from the label [50]. Although these figures are frightening, no SARMs epidemic has been registered at U.S. emergency rooms. At present the biggest problems are the "loopholes" in the FDA regulation of dietary supplements.

Conclusion

SARMs were discovered in the late 1990s, but clinical development has been slow and only a few human phase 1 and 2 clinical studies are available. Results of the phase 3 POWER trials, studying SARMs in muscle wasting, are awaited and will guide the development of future anabolic trials. Until now no SARM has received FDA approval. Due to "loopholes" in the FDA regulations, SARMs are widely used as dietary supplements in the fitness community and bodybuilding world. This results in the wonderful situation that clinical experience with SARMs is represented by illicit use rather than by clinical science.
For more Lupine Publishers Open Access Journals please visit our website https://lupinepublishers.us/
For more open access Journal of Reproductive System and Sexual Disorders articles please click here https://lupinepublishers.com/reproductive-medicine-journal/index.php
Follow on Twitter : https://twitter.com/lupine_online
Follow on Blogger : https://lupinepublishers.blogspot.com/
submitted by Lupinepublishers-RSD to u/Lupinepublishers-RSD [link] [comments]

sample size calculator using hazard ratio video

2. Sample Size Calculation – Basic Formula - YouTube
How to calculate sample size for two independent ...
Power and Sample Size Calculations for Survival Analysis ...
Sample Size Calculation for Logrank Tests in PASS - YouTube
How to calculate sample size and margin of error - YouTube
Sample size calculation for Cox regression using the ...
Calculating sample size and power - YouTube
Using SAS Power and Sample Size Application - YouTube
Calculate A Sample Size of A proportion - YouTube
sample size calculation - YouTube

Hazard Ratio Calculator. Use this hazard ratio calculator to easily calculate the relative hazard, confidence intervals and p-values for the hazard ratio (HR) between an exposed/treatment and control group. One- and two-sided confidence intervals are reported, as well as Z-scores based on the log-rank test.
Wang, X. and Ji, X., 2020. Sample size estimation in clinical research: from randomized controlled trials to observational studies. Chest, 158(1), pp. S12-S20; a supplement document provides sample size formulas for different study designs.
You can use this free sample size calculator to determine the sample size of a given survey per the sample proportion, margin of error, and required confidence level.
This free sample size calculator determines the sample size required to meet a given set of constraints. Learn more about population standard deviation, or explore other statistical calculators, as well as hundreds of other calculators addressing math, finance, health, fitness, and more.
Calculate Sample Size Needed to Test Time-To-Event Data: Cox PH, Equivalence. You can use this calculator to perform power and sample size calculations for a time-to-event analysis, sometimes called survival analysis. A two-group time-to-event analysis involves comparing the time it takes for a certain event to occur between two groups.
The sample size calculated for a crossover study can also be used for a study that compares the value of a variable after treatment with its value before treatment. The standard deviation of the outcome variable is expressed as either the within-patient standard deviation or the standard deviation of the difference.
Sample size – Survival analysis. This project was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through UCSF-CTSI Grant Numbers UL1 TR000004 and UL1 TR001872.
Sample Size Calculator (ClinCalc.com, Statistics). Determines the minimum number of subjects for adequate study power.
Sample Size Calculators. If you are a clinical researcher trying to determine how many subjects to include in your study or you have another question related to sample size or power calculations, we developed this website for you.
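As a rough illustration of what a hazard ratio calculator of the kind described above computes, here is a minimal sketch of the Peto (log-rank) one-step estimate of the HR, its confidence interval and a two-sided p-value. The function name and the example inputs are hypothetical and for illustration only, not taken from any of the calculators listed.

```python
# Sketch only: observed/expected events and the log-rank variance are made-up inputs.
from math import exp, sqrt
from scipy.stats import norm

def peto_hazard_ratio(obs_treat, exp_treat, var_logrank, conf=0.95):
    """Return (HR, CI lower, CI upper, two-sided p) from log-rank summary quantities."""
    log_hr = (obs_treat - exp_treat) / var_logrank   # Peto one-step estimate of log HR
    se = 1.0 / sqrt(var_logrank)                     # standard error of log HR
    z = norm.ppf(0.5 + conf / 2)
    lower, upper = exp(log_hr - z * se), exp(log_hr + z * se)
    p_value = 2 * norm.sf(abs(log_hr / se))
    return exp(log_hr), lower, upper, p_value

# e.g. 60 observed vs 75.2 expected events in the treatment arm, log-rank variance 33.1
print(peto_hazard_ratio(60, 75.2, 33.1))
```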

sample size calculator using hazard ratio top


2. Sample Size Calculation – Basic Formula - YouTube

Introduction to Sample Size Calculation. Training session with Dr Helen Brown, Senior Statistician, at The Roslin Institute, January 2016.
In this tutorial I show the relationship between sample size and margin of error. I calculate the margin of error and confidence interval using three differ...
This video is showing how to do a sample size calculation for Cox regression using the software PASS. In the example we are looking for healing of a diabetic...
How to calculate a sample size for a proportion (percentage). Includes discussion on how the sample changes as proportions (percentages) change.
Power and Sample Size Calculations for Survival Analysis. In this webinar you will learn why survival analysis planning requires a different approach: ...
There are a variety of procedures in PASS for examining sample size and power for the comparison of two survival curves using the Logrank test. The procedure...
Learn how to do a sample size calculation for comparing sample proportions from two independent samples in terms of odds ratios using Stata.
Calculates the required sample size for a certain confidence.
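Several of the tutorials listed above cover the textbook formula for the sample size needed to estimate a proportion within a given margin of error, n = z² p (1 − p) / E². A minimal sketch follows; the helper name and the example numbers are illustrative only.

```python
# Sketch of the standard sample-size formula for a proportion; illustrative numbers.
from math import ceil
from scipy.stats import norm

def n_for_proportion(p, margin, conf=0.95):
    """Minimum sample size to estimate a proportion p to within +/- margin."""
    z = norm.ppf(0.5 + conf / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(n_for_proportion(0.5, 0.05))   # worst-case p = 0.5, 5% margin: 385
print(n_for_proportion(0.15, 0.03))  # a 15% proportion to within +/-3%: 545
```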

sample size calculator using hazard ratio
